[Tar archive stream; binary member contents omitted. Recoverable listing:]
var/home/core/zuul-output/                    (directory, mode 0755, owner core:core)
var/home/core/zuul-output/logs/               (directory, mode 0755, owner core:core)
var/home/core/zuul-output/logs/kubelet.log.gz (gzip-compressed kubelet log, mode 0644, owner core:core; compressed binary payload not reproducible as text)
]>[o]sf0?o/҆[FhĢHE.G_ŒTsLpS~S):cj~?~'-(N]$`fLpM%B;XⴛcBmBFu@ UK s2vZa̽7LR&DE9%aC E/X\[q -IgCɥ}MIp꾝/?<؇D`B >:EI!dCK)#W{{}.[%|b3ιO%=Mf ĭ(g?;y38%qN{1GNYuJ4(f \Ԡj K  ݄5;+%Q)TjRq{d7W)Zgzz^iT)}t+Ͼ[h>H!iAa;)rHR@Ax A)!pGU@ط41DBkM̨yϢ F刊`) P"Mj;Nklwgǣ4嚙{h5:EQ+:4Kvfc96¤ၱiTR,R0b)8}#&roPDd}VSk!:ƠIR ƀvo7+<VV][~{X^_Uݨ z)ñCG0rrg8 3 5SH8 fl ށGG6HI."(6a{B(i3!eP 5~ ~dr6#с94<Xջr 繱1̈́\|*{46ɕ&1&H1 >ZQj<3TD5 nQ2:c-3QhJCMm&.:a/g .i9wNy,Hg?6P71CoGf06%"X\LoF4<ɮf2@ μĭ|?|2B<[Z=F._1HLۏ⏌σ6",[G@)P)+"Z"K$`ΈԹnviR!`ʔQ,M#."5Q2AK^>LG kaj3jXee4zl5ͭH[V[#f!&:+Zs8kW3Zf2+yVƛ;/=/\z0(0 #QZC14F $ FȻ]!(Lei}Q j_]pkޯ9~gӧR/4N..܉4NS#:*$W+`)yExSIH%w ER{U"aFe‘S42KPT 6[+@$휄3ZoHpt@~ޔ> yw<:KIZlS̕|GA"ƈHzhLy F !k1c !F5`ˉ;ӎ=EH/\6a:1Mikf|$z٢O-~xMH*24YF, fA<t9lR$s`nT]ܛwڵA,h6άPbʝ"dӤCwʧJDDR B&h음!X%iP g4.mW;ɁӐNҝR @?oQGTYRc'1蜒Ht:9V$V(g+O;B/qg$L~Yc[%k,jh]sy+َq ^VrZ8(a]# "o=8 $6dxnb'4%1%meNO81tĻ #E?hv Zڏ#`\ R!'q$Xv!cFY; ZDl h̤G)ae9Ms ƦWT[ϲcz'ןqLg z Ο|ȠUb:HxU!ExhiS_kZ ;-ՙ1perg0kvؐAEBU:" I&yQg`ejԺe֥YC$"JRZ¨2h `Nr"p((@Tv ap) VבrwerARo_[GM $fRdpǮQ+idzk8+ 99O9{Edfo~_,;H1[bm킡Xmuó7f8[AbՌhw#] C)fY>Fa8e`eh8z2ٟc~ju͕sP< إ' b/YO1+[rLCSQf@9*VSd9&b4 )Y(CPAQ^悰yLй: Yl"Ykcqچ~N7K> g78i@쭏t/+OS?~?P-~+{sla`1:._uc?΀KFeYJ\e}&wL&RC-& ; wӾQ/hdmƥ?T01AwJ3` bB~5ã_h~GV{>j7fFwz{kYG'1_W$ȝHn׃.6$/Xέ6,ݧz>Z+?mF?_52_OV_ZE>sy59}?njQ&6dFt5Ij*w%y o/;U8UYV{ׅR dP<d =N6Lg)G\|7k0ŇhG5 9T;te_&ռ8ϖ}?][m?*H̹UBBˍ{8irVل{Ӄo[ڼ6lXeQy'VQPb.qfk  صXuntѼlW A*A*2Wpw^YΝU GP;K JeJBVeAJ6|uQTT{bTEAh5bTCp$8~ l%)pF1UJ,&˔3AA_Eb戂.fBxW!& ]OŻTvyWxh4Mo׏&n p`N#^ G1%smC7>"$#tҽoTJR*uwrCF~E<5gY2"F+8tlv"Jbv"rYN*u iS1}1;?lGWq ]ߛ:ujq^DѭI~K}F[&F',E."Id u7hB ES]ҧ;2k<([tl=7FHT95\Ն[oYO@geV۴>hEwD[b>#hEQ0ˀx4eN<r0t2J+$ pr^zrIA~7LpHPzL/ dSɻiUR4߮4DaexKO/'o p|(q~OϻYq@\E|Ne,Z0N3Fuwh}7éCp{rk1vdXv|OuHVG^׫W}JYU5,(~rwOou=oOx~VOʷ:rߕd?]h.N/Fh^cmgۧdK.LG=&c H !ա^O^*[~|G 8Yyg8]ˢ6ُ7 pπό^ͷVdL*#GJKHMÈ7}CUO˗fC_7R֥rtɭ&X>,$6Y w{5Xw !' 
CkJuq-g_.73gK>;Beom{etTB=&M7_t~6IH^qy=;~iDW)?7Zs YW;.Qo2d1~i]/c޹F2+:Ș;h 9`X-+MwQkm- f,;f+YBB9mKsk%í>*BV7 zDKS.@.2KəeNN]IT~Y^m8{i\HUۙy} MbAQpY9rZEb ΔXzY(Zir–t6P^1E GD/%"HIJZ \1ʊZm8{0lzE0هAW7QI ~& d{[xɴX9;EI&/w[*;Jo-eVyw1BH ELZ(E*CP':zjOJ NjQ[S!jkS\hU&eDl &MQ\dj[jmazmDjIƁPʶ5[xp=lWoi8ٿ6?0˜'qA-u @RONQXe>[Ba,sN0BsLֵ 4 %I0(S05>DFP)ce43.5;L'i# %Zm^j+ެvCwIzoG3IIؔ SN(dJV3qRL> DeYiI r(:J, 5GQtUbj4vJ춇}P?}ybbǡQTYf|r> H=#Z(W)*iM墘Et3ME ɀ΄ԨhcHZpseXm8-WHK.λr ij+ UfܪE;CF̩ ua# IAjHa ,̱CդP{XsR{0a"-?/M#j>HOXdNkT 1[=Wb#gg϶uJ1;p] pŌLzI.I(IWJ|5۞ {P:0)Xs+2&dS]v*KòKM=AGI8g9!Z4YJ$<&k/W% ;tEO>A#XZWfa$(JD^28S5ʃnac+1t[0MBri,UrLB TP.z H ; 'k_5UwnC[e[͠@ہZHv`lہm;mѶv`Ӕض*v`lہm;m5lG q#6nƍظ7bFl܈q#6nƍظ[pFl܈q#6nƍظ7bFlVq#6nƍظEl܈q#6nf7bFl܈q#6nƍhO14FQU JGgrpvSCy Ja07dK7QG/_&_ܔWpz(>puu"mo|ѴG;_E ,0m"&ItR޳6m$WP6P*qo*Nn?ܺTxHl+[߯gDPILW0{{"3U+kuAJǩhP-DsV{@N GT &E.8R,fs l3`Ẻ4he.[ ԫp:RLɁcR QiƝV2 0B'?Pݖ*i8]zfID$s:up[ qBC%\p88#+#'!+_{K4x2fA)D*"td  q;Fꗑ`^ λ׋vl]?cq>є RTpaC_Z$KQ9b v$Imc`ָ4(1lb\0`f?_{C.͛{>z\uQWj/>V "",}33;؟A퇡.wJQtiC(XR9 \] o %* yBc~;0(|ʑpQLL$.ξh%lrBb&|8wF uK#K+NRvhiJea}qiMŒ9\{u~~r8[FJ-938T{KroWE@j_ϓ <1Ce&yLm> խӬ;`DF,`.&M> /- XmYd۬mk%BZg "4<&@>?.>E}\Ckr t6U 79~/^G?zw:zˣw <8P ({ \^ze_S|TSS 2/s-^g/;Э(}_zi0:4g':-&xŃdt%q=o~6?r:q5WR (X3En@[\U:y1iGlM񓖇%/yJWmdN @0lNcc~v0mdQ *0wHsrpw2O~-3 :'@E3 *%_Fb҃{V3fayH#mcd;8P xÞEbD< v"GDL5Br5 t&yFzĽk[E=sE/]wVOo0Zo|F\}Pre^׮x{BX,I X eJ+gI1u[bֹJNbw^[E^ |f+URzQ9F+i1VEcP0v)\iGji!/g)Oc.*5Xپ;6vUE?Ltufe5g>YYwxdt{Ko5]2nT;詬#ʪvҏp6KD{pO_ |L!L/d4[xޝG؊<5CXG|揸^>,Mjiч9EL Y gY]Bga'W6+j9$E:J %nC/cc+gwɋm ûU*ȲvuoG_FQ?~ ռ- t4!LP|&TWEWثAE~X\ϧ T'A: =>DlY<$9@*ϟGh+ Jү>N ɶFJ&(+񟆠s4p=ÖV @{ze$"_jYn Mfx2wmYuQ9L0+0fp2g9{v+~;a8vvOM9.c-t>(}_a߅3[ U B:)u8J~G?4N}|΁(swg|IPƅaP'/W>m.]U0Ճ_O}Z~pN"g4`[*$TɂĔ#"_As;'ޝ?9 ,R" Kk( bMi":Q҈AuY& P91p>eFO7SDc.DpCLD6] Yh6|N:7LÝχGן‹ipuqB8WyK#kxwFNאˠ"  @c`(j(5*%3vǼ0o}Lr;GVPqɊƾݥzxbzu;{b|? )JkmP::e Gb$݉3ݳD!++=k g`Ӌ9|v`_) UonC}{8s+Ȫ?e= - u. eH,:m@m4aH{v`x7O&eu U}Yһ ]hfq:>+R^mf.0ZL n`'Hc͍W{`#̰eٽ]栺S;7]Buf)_vi"e"m5h<< <+"X!O9~Znݴ\yّqȰ nWFz-p5bඑ[XLoĮ:ζ)n)ꈇJB|_W. 3Q2/Wd:XxzT4-ԩCAaYE&uO;cLִo b NHw k{~6+eeli))R;+6`%Ȥ3 {'cVxc f` g4]Ϙ~-K!/3ſݷZGTY:Rc'1蜒Ht:9V$Z5".(rn6x{j%P޹Cv˪m.V 8JzY鮌ʈF^F}YX^oLy绽[A섦T2P%SҖYV$îa*9 bmwڱ.WrclS\ R!'qx܃%Xv!cFY; ZDl p̤KR0#4'(xAc¬Vr&Aכ}zRU3M!6,hqh՛LՠvՈ٫OdjĐ^}z1P56T*TfKK{[ oYamu `j=o~3|7jq]֣%0-JFZx 瘋1\2M9q%V:d ID+T4h[" `Y0$ ~0XsEdŕ4/~$Dҭ ɷmLҼCW7nг\()n"S8I4;pGB(G,X푥Lyo+h5h|\l$0rF5# Ft6\1 AV"i|:,_π͚ jfBKB$6|M_߶;%?{WƑ Ow]]/-_eŴ(R{}Dɢ^E 43TOWU?US4&/Ed SU? "MR|!&SQ;L.wqT)Z3زm=ݢ v|HMJP)3kH?b#25LJZaeŨ,T ˈlYO#z۠ %Z5a> :I_Sh8[U|<-. 9jLX"㒒 RR)\Fd,(}/+Q9ߜ*sU`ȈDA:km,G~f܎zg!8U󝴯/%4/p.#ع~牠\@oa\*17| [RCXƩx1|bYZ[/yr/~ZQT6gOؠԏ.f~X q W|?ÃWڳ$kbL ŒPԿd^Bc箳եqk>ˇ|X;cshj +ow1דa7cjegT~ӃãMx?sr:?Z,k4Y2؜M _{5po kt~.|d?zB~pW,Klq8HiZm蚱-P t 4qc4H\曽_CFCf[Ȧ WF>]ӻZnqrL1O]ۀmxy} 'u;M eDz 26t/Ηm؊n=SD*.#~ļw 9 d5d~$c"C dP4Tb!Z ÿNJ򗺔Wn,A•ه$ིV(0$BzYTM*Նs#Fn)6m2'WgyrDd1MCyRiFo䄯+Gv)uK/'˟<92MS_>m(zERծ`p19`;W5x[^-K2,R="u,f|x wX"clN4TT2ZrˀQ!Pp BD&ێڨXip 9EA>LZ.˅+bPJ=8P_" H{p\=ց]ێiY;W7EZh^N! 
ߞ|JK.4jCsdXjc;mwbe*GΐRhs M6U)(bQO`}M*C<3hZuf苡v|Rw)-vH17ʱZ_lr|tDi\sǷĬ d/h+O> ʇ Za 4 ?+ ;qe?w~(]BI6HDDuC>D-u8%,u|U,MaGY{r`ne=K>]^sf֤ltSF[S{کDFEZdT&gPT6 D'ԥ߱R; ͎*i+!Pl[Kg*Q&Zl&|-zvA1#7]&7]5{㦫N Mn4'/q' !mBUӣZrL< q2;->IuX=Wz-<e(Vk,_'g|4䙪QnA_͝\+Tجj[U/mԮUʵ@j0rJJJg]J]wWUJz6 8R()LO%iս%i]KJC;ˀ{+j'Ogua\N|QՒOeo\l!q&.x(W~x7/K !dԃ'DiUUi/vs}T_gɿ_,<=`']Ҁֻ$43AbH 8AbQdIY.&2^j6rBRײ.OS?b7.Sgx9uy>whtqRT2zW"H6ƻɞAgZ g-@3}ͥ "ΌPdD\)d%Z|=VL ڢ2ѡH#m[jYmI"(XkvfNt;G'dL=KgvLt;n'dLv2g\+du2NjLv2ݎ!-ΙN:Yg3dLv2N!f;G'U`?'U\ܛ*R\#T'NDԢshG=Zߝswݹ;w=}d%*'d+qnqza8M5C`pp-Z%9nWir\n◷ο.AZE"Gii9ʠP ؆R(!BS[:DEnIYC,F)*e g,d,ɉm8wL| A쎥XAf 8|qN:P :TKAEBBA8w0䛯i(\&zȜi%c Q⣔&cC`M<JJGnI]1T"ݲ1mL-$M-TcJ@.z!Xv9]E~Y7\ӁVMސ 'J#}:OZ3̳W}΋Zԗ)2AFMA1dd{77Z_Bz;T_B;ҍ@("J*Y%Rvi6iit"d5G0()dʢm7笩6XrN )DHə"UIҗ6tDI3q6ot;1ߎt9ϫãÓ4TTN|,xq ) kD7*ԤylmF30 !5[@q1S'YKC1c`#1P7H(uPsZ?2zG~Z!:N'u/zַqi$Xk2}Cj'czX#s|Sg3վˮ98N;oϮCJʥR dj *I= Y=5ښR ATZGmm*JmdRqɖ䤥V^JB_IĹ{fX/l[B}Q}Dul'^ɱ" ?>N'cj0yJ/ƺHFtJa0 N hN[YUC |Bm*mƠ1طpl'+6.1mMJ=vsgl+k˶^dޤF9{|_XN`SraA( "jliGșlؾ6M$rP"GQ] , 6jPl|fǠ۝\xfGGܗT!*>䢊Ro/(i'Mrq#:+ЦfB%]FdJ3!h1묥Q=84#6vmʸv)LK/_IJK%vcF Pm C ZW3FSd Ubw6ӎm!5[? ;'Fޚ ܋=Q,o/ՏOTx->.Mʫk? s&0X;AKˎ:փvX$ܪq=@#k_2@E A,@AOD+D(Ũ"۸3HʞZn aatL M."TwA NdA Y3gğ; nk/.~!}%Қ׆JmfuRhsV)+SwM)D61z{1j4cՆ2KnΠQu:P}u=ybI{F+}$ '`>LFƲ$}El2-608mT?]\c):Jy\YY"&xh6 NXp>xB؁&d#sIՙK0E`Y%T1%2oEpa5_ΙtJ1u%O]LbVЊ; e$Ehi3UŤ5^FOqN;&]ٮďGviF vf^ {Γ3܎o|9V{n߯g5WjoX|wڮ!Yfzhv' (vlѡb^ۮ_ʫ@FX6ξ>XNje+96:ziaH RG&\4&?dq6=Rvkn<$Ngav%"5=7' O'|02{hŒ֐oխxWw%T%}i4 _Rtd~]$},/nkBd.KU=nDJ&ZWM3ȖۄM1M0FռM4- nMޔb.hޫѵۅ7[ Ug[~@ *g戇K_TmU'>#tr0P Be֭ޑH}tu#.fme RURc4PT`o_MSe 4:6W-kl);ݺ0֌D^+Wd]Ģfv*,BSw44f%& w2`7`%g)X*38eo/TaMˑj"K98vshg8R:F`VE:x6duxa~yћ]sY[uj{ C)߮J YVa/*/SvtV:*T+U`1A ڼ7ŧGj/s:2<<>~v?hXhlU9@%dQ ]Y),~eEU*|PIQx `JN6Qh5zEn؅]X~ól d JPqվ^fUDRcMezEOE'zr6)%,CT\LQe0F_*`cv5-AM/ϲJ9ɩ 0Ӌ۷Yp8x F ūncX Jjr+G{*[z9x1*14+ץtMs9NwSzhe 'rM)F\_ -%_s׉ʰ2Z˨TΌʝW\*aVvAU:hvN;C^ݛ;=jwmʌix-ƻỜUĊ GT &E`{@W)%h1rπ1' _u wTL z|:[Tx"e]IE$~ GO7(" (O ;{ݗKJ%D<5AFQj,9D:8A8F!LW8jEI?ll,Ԋ`kOv+ʘAI#JIK@ Bꖐv^#8dST~~sq!є2Bpf/Gc_3Jr\)&Ig/e60 T+5, ?,c?].ToCq6G;n 3v9A:1IV`zM[#[?.AV *ːGٵ=Uf_Kwo'/o֕ hsu4l^횣Pmeb䓓7f!0< NpQ?nt2X9:ͷmU ھjɦUJT)j)}~W̒G>ttL*'l4Y<O7o߿K??>燿?~1&_zΟ`8lA o{@ZreWKzTKS宫o.r &W4hGn)[K(;*\*๪SOٌ`)5qAf3_%[(Y$KUSPt*r ;% QCt ^]Eh,bFkM-S4 vI3#2j0?n 92K@)P;QK9Fg)/b:Ccj H֢S>AdT(&='Hj8hES}, ޑG9/I!y#G6H`ְgX`?4]bpu4(& ANP*\ 'v,\\O,pݼzjETNVt!iՈR 7?ȹ\.d;HVbMJb(PLnL )&C֩k^rvYhw}j|L9K}l% zS Zo1]Ć0XI*bm5*Cz9HysQ v'uN O9D6/kc+@>v9}C2t =vDSV=uU -I%^ Ow E&6ʘ#Yδ:AbAAzۧ/?ZV$ {vIKf%UȊa6p0սvy fB{^xPIݳ\n[.X9Gm;GF~:'R$o?~OdzueIb56TYbr,f6޷:J/ E[=*:f?}dv(Uia$Rj5A ڙp5Ӝ0ЬSNlsm %%c88/ LD+T4hiB$1yYfߵN*9te}Q1|*zSUnϦxwdKQ٠`'x<ǐ3DTCThQ>SNI D*(zI@,r gCrkO"HR7,RpcYJhJ ZYnhS:*6TJF9`Ť%BfDԩϑ:hBJ(2vLgӒΪKۂ$[s>ƙvNH %27Ƹ|HOUk HK#hu >l`K$rɨ^ۤ!% hZXx -u6TU3U}brVw [;:dXGO|@',4J%p UIg8T4p+K cBq^ss"# ^P#S c$(ںl0E|E}!PdW_ y`x#'[x0R1NJf%'utJu{ayYypVgKP,ĭDQzDL5uNuȹ]BzyG7SZ;f;*C:D+b9'm$QE[|>QgTlE]ɵԢԢZLE֢^OvCy+yG:DB)b<& Dv˃[TV7k0krQԸ Q:E-X*Fa*h:do)Fֿe *DIF#u `Ab8Vܞ .J%]Dիeʓ Œ[G{r =UMx(c)e+m1 T9܋v&tθ#1  չMOd"1evlein^Y&?'/ XkD=%kM">ERCI;QPx/KsBwu^셬Q$KHjX- EiILrttQڑ,@W*J@T܇vֺ}61,z |~Vo'46)$Hj,Ʊ<[(r̜A#ys@ 狶9p 8T8= 9jS9dA#8==,Ow3nIA) I伧s `D (A;܌xI) Ld}AkOIbZ>T)A ЂBL4*1;Unr@ZXzFꖑb`^2*c6wCRK gvڶaZS6W"Gc #2Őj 3g&śTrKޏ3n"O mpEI;Y)$DB=g:Ʉd xC[VG% Ij8qI6r%d޵޻h4Ki[O)`D[K C $U}׼^؝^|Hf$8;=줍ZoR ?bzל;ڝ9[fͫٚ>{}t?pv1^ .9 r8۟k{P[mi0Z.!^+FҬ #FF\,o3-F U'xtxXãtVQ\5ꪹ*|zA > E|WxvIq ǸSwjyOsp4{=H}ގOo7?߾zݫ޾ ߽wo߼SOSJ|0$ߛߋ u54Z#l0]zq*ô<[[ϧ^,3⮵N&k*tUA^l~4K; Ҽ^ (TB q7A`]z9Z4qt~)MQ %MVũf\@uN+.nh/Uc @g&Q 껕0i<> S?|J]еI ybM@(.X/,h2'y^h#nYV_XH>Aaebm8C--֟TIIQ$/>lC %VlO|!evre;2ez+25(Eh}o*i}Lv,HQkH=YDXn &P E !b)nӟF2Հ “AA} RAұŮӟuEJǵ[T~52a+q=YZa 
ӣP댁}:0DmI5i)n0¿DTu'٢!E`; dRBLYi@/+n@,Z'PqQX1V#eE]_Pl Eݚb>qĵ}VUv~޼8Pz&Ofuíͬy9G[:]%0_ͶCnpE&ͻ]z>o|{see/\N5-ޮ8&{~,s-]Ůʬׄ,b¬0͝fd9ћH``VVR"na.T/#ƒ BmC O Ř d,Q`P!*9J#cڊ{n\G1-B#gvWȖLIVl7$JA/LIL 4 #?X%pI%ω*H1E*"~*Fތ9ynX6Fs#OK=,m|Ց.돦_$~XӓE{5}?lo-_<[QN.7L00OoM39WM7p8s +鳮\Cu{y񧟯]s %഼Nr!υ~ڼvx:uPrf Ot?hcZC|W?sn879ffw憕ι'w;Ʌi$O[1L|H{I|)\VsH./\;MwL ԛ/lšmz;ܾnPˮMZ 6걙 Kb0|'/[}rȱqOIf;N--݌k0vE+݌k2x\ ǣj)ZCUҙlU:/shrJLhg쁽q:=X;H<P"^QD.JB|1#Y:U^(}4z<n[Hp{w |zm6 ׈Rk*u4M&Bnk]"O;+r[HUǽ4 s *ޟtW C'X[yˀNm n4.ON5ڛv<վT1EͿ\SxRk|WvZy[ {a}s+vlRS^/^;EnWfn*\7xu-[;K+wO"8|p?y877矯<5f[p%Α 1}R ;6Ei'E,2j#j\&ɠ 97uX-޹~iS 7g<88s)/쩸-&+`j:m(2$+A2wu4=r^I죕BAEB>@!ƬRKbT&^9Nɖ4ny3ZN?im7Zu{Mwuvp{PqΗjeN9*mle!U%$Œ")!I+kB6`2V7cVNcntLHR$5ٚ0pf>+T-lG%YZܻAjUXhRޤ2CdK #-&X\L br _m kpφ+Jmᜐl ^Dl?Ä[f] Ð[κJq5i}GdI#dƟ)k)7gM-Z+C Zf<' JI 2cEމj \2%dr?ÄfJz3&/bv7K 2M)GYGW(s"_,HM"YA-YY=&&d5 #Ț,%C|]ԚER-@@"Ӭ'iZNI;'搔] ^igd U42fњjB5V`ά*(HPjsAuQVm SH*,eħ@46Zy=gc2){Kx%f ,8uT]cE^h (%3P|M4M%.dp/SLsk#ޤ8g,q]AC\ZB4&-}5XlIՌ9&G4..`TLf[>kP s1 `eh%6WTL#ؒqW: 5ŸHٞV̓j% R嬅.pWO K`)&&vl60$}Oc{F~R;H"z]|%bxD P K ɸz)P4VH(,@WR5DH2;1*kZ/eGL4!v(c)y=J"llK@22@}+XEeҾJ L#+%po.b:# D^AA\Rי%a= 0|\ԺnXybW&[+lbոﮨ $!)beIلv|i`^ZUKZ44 i ժUV+ QV +\5$Vc+Ŷk4aȀg:c_N֭YcVɣ8i`RT"_H !lB-| 9 2gdt켜.7ϑ3gs/PA&'D="߲i]: lSGB$a1%T4%@}&2]ധhR0n,J7AyA3 vX _XP0@JPE"&LԴz(}\TT U:?o,YQ<{m N0ɠթM< oĭ 2i`u+*@JTUB"m dNJ"8dxoKr{sԚæ`QED_b =MUG]J{^t9 (T¦ җU,U\k)d2=]Gm`u B44-+ RT[YMY"ZzBJw aK؀$E}׌ieah8̭h&kژw/g'0:,ˇ^1mk3 @Y#W@7t0٬ B-܆s``خ;HQfk.֐T{r$JV  rI#WJvD|٥PHhUL _#!:̧.no:L\NPklX~jOW4<3!zzG(~իq=|7o7B@"Ȱ$Mnekj/_9/_n~4ӒlKR^.XӲ&6~u;ӻM5b}/f}[?Zt! fx#>aSF.B{3sNjo{^؋)/ϏnD[sǻ{CtzAlW3Zb?y<0|_0| \^PPPPPPPPPPPPPPPPPPPPPPPu)כ0(a. ~,f]ެdP x\?zwmjQ9?=Hg=FSt&5%J\%jl>_UA[)wU{ /,Ohus(M1|^}f5&Y{o}?[ 3a-Iۭ>%?W[ z 8/8WE,J ߊ\qċOVxm+Ch/Ǔw@V->1A=_Onɳ'7ƓujJsn ▎ 0U wſo~4yѓ[-\-4>:d.|2 *G[?܂O`wd-^ѿ̆suc}4ゅ8:*o_˷8oWZ u={q c#U/ ÉmF_%rI.IiEC#QdK$ vkfUwWs-m+C >qiShBm69ڗR˴) ;(N.@ݔ:OG8vsy6OLV$zef?*n:AI:Z>>?בiYrAL6/\S 4!:Ş<% 6pX?Ѣxcu-ΆkL/ޙzA|'& b \4*<|}~blg×<(kB 堸В>{K&`2Zm3tB)B*('libnS6"\ (!őhv)ҁpӐNX-jg% Y1qƮ3kutq簥mL:_ 3TD'1 [FUo\t (Y&PHCz ?}`.r\{)`mVIFtPpiK{ibP1Uⷧp7+w\]J = γ(>`h'"P qq86:Tgy{dh=NY18 (ZɾCacg v&( أ@2 5 axԈS. ds*S mޥ-&B+A&BK<}.{W&f,u8?z+#[,niȭLՑ[WGNqlXhڇ*^p`yNGs36'99wBXjbZ^{_-p,'Y}Z RNEozE2ӓ'= *$w.Cfp;_DJ&+)Y%%dURJJVI*)Y%%dURJJVI*)Y%%dURJJVI*)Y%%dURJJVI*)Y%%dURJJVI*)Y%%dUR\ZozE락RVku~QPw{MM.|JD"ycL!ި'|D"JbJ1"Xߐnߣ)ܑHݷl2.[a;`(ךj1ܱVC)s]0.4$< nJ<Ӽ8eKxY&L5Mazf\m~\M$8Mp3t[{?9&[6ӠlMJZ}qd+6[ ~ *ɼE R똎q -?Bsp'υAὼ(D.%K)}LѐyD4F9UT;9$F 'DOuJ_)}њ~cJUfvԛ\w' xn2|\|'#ͻBAL;HC_BY0 AbCy ![4Q3;IPg~)E7id3Føomδ +nn=† d5pg|כb r_hj2a7[S)~p5jGcdUcVc;ʻM+선pX듁 'OgLU(BaUXNTH>vw]_19!wdK1?w}ൿRJSՋqWlW^aưI'<Z}x.^0h W7noWnedgoMٖ-;+ef +Ȗ6S7o;>Ikq~0h\]S^hd1l@;K;"#1ƫ0]^Z~s9R\UЇgvf㽟xyceЫhOqRJ~\z  vBں7>L;j NCkw$§쯣<:uDuP]wFvөҝZtFZeV:|%vdc|"y6LPѽhVNyƊs}6 A#cr ƆREAjNpǭT|tP1qn>}~b'DWn}5򲮶}~b×nV /B۴ySe^0'Zm3tB)B* < 0gK1Y)bν 2Q #UnرfԽFNX-jg`Ŗ)ʨxH4WJSCuT`1&9IǨ.2l +lglp簥mL:"$g{"gx_8-_IPXr3ɢZ%)S@ZyF†K _ KCZŠ '==eC؟} U"8Ƀ%Xx2GgQ#|r8HnN(8(LxOɊYtbop%qCpG#25B( 9.||'"B*8i]R,l3.Wk+"]M"S]O6>0-̃&9UIPA'NX%)x'wX&CLUͣyb4dW됅C~WX迵 )9-NlreNHd6eABz=z&ÞlB݊ͺ?e)x.q`(. %,F"5CbutcM%g4]xg'yʳ5+u2.I '$)#IJ K焣a!EB/q4'y { |♽Ih'KrHhF9Vs #l; h<È7AIl]BIɼf#ndBNE̩8n+A>|\;>'QM2G() anp@yRQGu"06Nx=@OKdfGk~3`k=XmΩwB|C=UR$8R"FR)k*Z4v2z5AvE(!po fTQĨ8<h'd}D8 u 2Z`:Z2/;u+{#.'m_Vs&W.orWEQue\,:΢kEYgQYueEYgQY593}Bdi 0Sd@\婐ei=v,%7VyIP.D-l d7pN](kw.}vOl`|%JER)C,+̣ůnQ'6(6j:&oKҕyl4Kz9nEtߍwK4g#FﻯT,:ZH#î,sLdWӔ4I@g7Ȣ#9PSg+0\sD26die$v] vkGjw׽KVZ?"MZ5!GܮO3WzF͋9ۛfI׫ǓO;BBgwW7nzX5k2Vjk*:׮. 
bK^=Q+hkmXJx pf<4$09Le  PH7D˨@EG8`ׁ:墫^pV xM8kQXTKg2]4PtFBZGKpqV'ǽڽ'=Akuyϳ{Af}\{+Hol(o~zB?]4W}"Б _D o!p{U:#-c58)`vÿ>0f; gBZ`H+YRY"5_/ ATTcټnUB#lT' @4@!(Ѡ|j"Eyg&H.jpds*HqcQ?2%k暆~%a5uڝhР=·_z z4ob>ĴFW;>喙ƻ R&xM%! .*w ;l+ZDuQrlj Q:E-ioPsq.kwLpU\Cy|/Sd"d4+/g6h)ȹ/.2MNҗKt$ K $V,%gȆ24h6b4d%oacqN"l['F6wfRl1 yYN9^x AII`%l%鬢1E*:'0-`\I M|Ceҧ.X4&8Ib IJGQET&HYBW@Qd%7 nQ\Lnɘy2&rҞ*2 YiA&PD%,`) 59pmW}ղ]~lB[ 4`k'x yg 'M; >Ψ ^``&lfhC7zm9w!VRiH")XIg"O M޳lA>;CRdH{5ERǜB^TE]dJ ^oe{vFΆ`s!iH߁M_9N~Hk%m!ys5+ar9%8H t4˛0̖~=y%iYOd jSL)m4"HM L= jolPp-Rİs<ܝɖeGWQIeGaSKðdz/Z^*v-sk݈Q{h/oclP {eūLJxڛJl@'Os||&ywu,M/qy }sUFY疿ldD߸@=&hd=C~ds`.aRf{{Q&DtAH' UΚ.*Y;? 5%yS@--|m4V['rpɛ7KbvBcBbCƂKZ$J@5U˕1-d`QF9Πޑ|p+z3/ÑrCXC"_ЃI]MKD Ym8I Ugy4sɑ֍rW*8XEB:9)"Q CЎnFY&e2Ɓw/}I`-x}VJ&iPг9fNC),r6 H 6!mk!mʘnF?{qڶ1.OIZSٚ.9S{NΉ5e!FUg!џ>?ux _n-򌇻ޏn5g'JF$ޙ;pVk^N4|ٛ^<ݣ̟-QI%(Cbø$\3r7% /Qf*/}\'!Sј>c2: "/NW?j$ u߿8ZR9}g:pZ~tmN֣Yf_[w?gO~jpyq8*8/fN-1ѸV]q1~yzrqOh=1s$0wt5FaVLIx*֝|2>X՘GG&9G]LkԮ*jz@ fNSrG|_dvqKqnwj'5h<{/?XÇן>//?x.(Ղ$v} - E/&>1mJQZi~?_\:]lUU8OlYxt]T&UȊrIU:ͭ A.4["kmhێob I)#)i;pq߯@{ v{?t@T 0BEN›[6l?u:L?JyF]%J+2DH1b@f .Ep٭A?hyT7D-lX*Y|q>5eG^[‹QUJ`GC`3XU@WR"sJ lZj{NeT5{ʹ37P\G mI5i)n0_w"*NJv`fyA ;TvkH(\4P,,kP3 sQCq`5Z]6z2s63%=Ӧmp|޼r_$SМ w]{=S[7ܺmZs}"K/V>?{WGr⧁E="Y?&ƁbY#dLkđZv?ژUȡx$ Pd~A &`x>`@}FM3YNq㤁,Q ^j2T98 +T,r)BN˹~D وQ+|/+ɰDaj vЈ~yv^-l{ʜT (srY !cwсxd$N =|oD2 R͌҂TB;8L^$.ܫ9(T u݅"ǛJp0r]]vE`tiJ8<0΂K]^ĐAH} z~uGzWٳCCQv1(eww3d%} =ξwW!qϽ&]N"ːe6D ,PH?+%U`FkOh(Y BM&~BhWYy+Nå``,RQ{uDbyV={I ƀMȊbNy\f 6]ݰ!I/IG E죈)VMrNr!sWiVDJgexqԖaM˻A)Uz=ZIb%7m[kЕpg=shOC9~>&g{S!SLeڔν326Ѻ+d/ObP/h~z_˶h9.i_4)YsQHg7AtϚ#;7 ?I=}M}.M~hT釢(- $gn݌I]ezh3:lͺ_(]zZ?]4\^-W- f_fjٶ{3{my3jyj)5|Zw?`\h^4MmQs7lgp.s˼CQT 55^a%H6z vlZ-؋$,F&%N{M㭔Vi#,hôu{FWZXHg `Y츍`L0 |l9HDJr)+9KQalWcgHnNA狵e>=n<%qٖvcvp8?lc6]-ǽL˘4ocv硊^cH5Y̝v|)cRNnKٯC*y\[ZIɃ)yPI{oowZa"r3fOG*έmHYrQlkխRXfbi%ԺH0!1%c@Tg SPYns g־yr '4GX-lOmy%{w0[{ǒa)0ϭth-68 LGN{HuCE~=*ȴSƳSS^fpF-zeI&L @&r('ϬݵSH٤7$~!OC]F]KLĜƇR17\ KO< x.Hr{F00=&=b@KgKD& #t11%Yg! NG+X[Pƪz~?~}VqnqMrmp~|.~6w$_ ^@~^sAZdIeT^hTsMG>DE`3ҕ0d1@ZRi˒6.p-@$Smz {L7z? #CKC.dipz1pI-SB{pdF</1 VK:CY >(AܖUnȝWq >g= hMя/((G:7_!2e:naPǹ'+`\ߒq | rpF$^ьL'YH?DF9JR{**ØYcҚHC9Z<)$K,>1խ7gbmup_vVV쾨Eh0d+pfNZkKG?R ;spS:޾K6M6MUO+C^A--ԯN 7Zi3^( xn?i5\N̏չ[F%F%Ƒ傒S(@"(B5W3di.kQ)vטtѤP<2y] ͢v)/[|k[ʐ܈C;oM4mh}DOSEl,6!ܖW6z9V2csD\ƚN}~>G+n+oUp,:(Qͣub^"> 5wRVa֓cu,1;;oVbdvu Q{{Wdc0sh~ ;RM^<G1@zńe"1`ZKRp%Er:亩\pQ)& oxr(*]w'g05ef+;R3Б. 
9NO9 #:ausW xpkIPy@t  TZ @q|fmȡ\:H@1!Ar'0L ),zq@(B]!I^bOub}c*Foo'"M> ZrSkFܮt9d#31ˉCdr̆(ĒJ q~JR`hcI cXb 5 T%o(aLi| 1* G\.%D ;!+1`l2wXٯ3A@ mhn،$ܗ# jx"Q[˔RYB>>\...JͣIpF 2,/g%<+Qny^,?ݏpeף$Vy֬…Mв,;CxrhsӞ0C⯧ޝHh70af aQRm&{G N%+GAܸ}bGw(^] }GdD}!6?G[ AU|r[ X^yˊ_n'qYj(b U->lUǧ'WmX|U/6$QXƣo8"yq̥P>XKI}RrGZX{Ӌjd7^J3OCCAs짩Sl,վ{CڜУ 8zҕItG Iz)<-;ίO/Wi¬R?90 pMw]9oN{e <0K]:Ige2pC9]Ъ.xG5GN]O0Q9=9uoQkَIg\O ΅;:;=՛؉uH,#L\o!]9m,8n,3λF!9np^ĭVy8򐸊.rNgThIAvJ0LZڍjMNU+ApbVs݉z{MJxVr e>=#A ZYׇUG«R2anuqOtALi Yx]'+|v~Va+^=nH͑vMv7+nԕy5۞, \NΆ Ai$');\0't@0)Qࢷ>[<|;m.KMM3aɄQ|+TՄY#d\U39dk)7jTssoh}^iމ{J 0opU9yzae֘uܓ,0_x"&It.h,8) ƺQ*+1#r!X`h@"o R* r| XvZ:7j8|BI Q.q]/r=ĆXz\FIw[n"2L,b^+6)C@6gX+a9.83p⃙5eq=et#{Arpy c7Od Gh)G~20t4a?S·dJͿDW>K '&ІqQ)rwMq_Nm]8#+(0z47{]Λ7ӳ׫bvI̹Al]튣i}0fym;GuH뇙s1p(ZYFBO&vT6QUhc:) mƱ|WK~{ -9J1K5_f_nb⌶~w}]~o{{…=y'o|CZOS֒&᧽?~Ц4bhia%9o395fyv֠ǥnmHw/?~3L'$&~Z ?m֜a9KftfG^EKnRev[yjE% VJL6ﭬح⛘fsv/-pq6O='-C Whdb)2Z^_6̥>j̢Cd,#3h 295d:@Njr5'vm%<4_l\mtv1dFNf yG̝(M֡^*Y4F^v4 {=f*WeݯRtٮd:fZu|,  'qjֺhnW/fW/Jc7~2R?hi  ?Jǫ1Gu.tR] N/Z4DYL7`$Y4_ok8/2B.L Fx$Fj#ݻC^?;;h5[eEG siw5޼jW|oNj~p1E-.x8 cfJr $zeG=ڍOT~~p9׌W_FIeNRǽAk H=IZ+!逊<9;U7N{C:xjl( J%BQWf1&(%68ƜF՜SȎ迕⻕jysCUFI- 4q%9\*l:d|V-~!6*>Ę0!g.Fm"@F%d0i>"KXڜ$w 2u,P~?/.ӏm@0 6j% H۔5Y+ŴAH+@ށvƵfAjP+N$Mg'=.~;=aT LE )%? &IR8dB *"8*kڞx&Y{!*R\d쵘_bj-2A:62)i d ֕ș)g0vV#B|B/zr(Wm|2-jn\ &&8/'<s! dtVr$Go}c ƚ8t(Qm2w z9)5EH>y-(s.6qb4)g/%t  Y0dt# xhXAGkW}TwЩRݍ6@V# dd2rL02L*]uRQ:'s62`100kg:T#x:T8Niě ̋4^^;?>{r)NG~#A ?UYQʹݡ JZ#4cM(,5(h+t>`HW{/%VyX=$h] >\\>o3KV' pޔ+:7׻iZVgo:/ֽٻe~80\^*UxwaP7P+ً ZQ*mTz$H\C;'=NfYWcUA4ק0Vy @d!R"D0Tk7EWUpG F9Q2Z[ SxQ !t\Aɂc\+52g퉳gWZP9Xg?JF2CAze[Ey[h=L=#pcrF .\f24"t|M)stYi'*Ϋsu@o[fu`0~bUe!GBZF #4$dU^H 3M ^Eɼn@+x}tE A}S>ک>TDqmiJE,Qs2L`0WQD9YtrV%Z$tsMB?p*cLRާ?~V9 i|$W>WվJq=y< o৅'BJo冄8yg?;y#KFpeg. tͮ􊒃._bv"rYN)usû.e?Y\ω=vﻷH kwӳ3[unM5vAh˄dE,}[XdFF'4Cd.5㽐BEiT7):rQD9=J*.!(lFύD*GV;}9;q3h4Һ,uqMZβ|,P"2dzrUL+Z`{PT{+b |+2v&~׻[w>^ϭh͐ =,;?_~~x4=J]"}ϨscXc%MH2>f~5iN۽7F 900v{^̇h݁pE̹>/`gG"smHB\3~7"$ÞF.E*$D!)lKP8m 9TW-PH ` #,03/L./MFu-`_(x0q5 ggNȁP$6te^Bs9#$ڊ~[Qm-2z|[6׋Ikt=d>]?8IL4XLHw)a^v{1An5n)OYnX_f~z[ɻ"?d֕4y;͊ I᥄񮇄Vc O0$tH^Ҩ  U,O4J|9R;VZO:vލN9#Z{\ST \WV=(Ƙ-W$XbpEr-W:H%W'+6 \`\Zź+RH_:\ + W(XY PDWVwWW'+f%H*W$WRpEjB)Ji0  \Z+RiDĕRiWP\i(g@q*9SĕaRrSj?5S-MծىK8<.h 뇧wq'V)#,h'4AIa_%ń(J'} R~)ր5%`'L1"V:5"N:A\9#eQwIc g^+R:e*5@\^V \ډK|kB4:ϻW"I'RىK ;l|r'.ux} ;Rl m'eFapVT V[jzL[Pg t@\cNS]{\ YWz*P\rpEjkR2z\ ZpE+W(׀(WVTJq%JUU Pc"]Yp҆+= W$WjWN t VKD3Hr9+RMqE*LPga7fGtifh]ɠ̎qS3C[[jHwqsVYNߜjA`~Ra)SYͩJܜjw,Y,K$.&;ɵP~N¨ҁC  6 #V+T LvWD㗃+c՛cVcqqNTvmaWς+SHi'+eSkuqE*quҚsVH.'"^Zy\JMĕaKuirbC\? lAr~qWʖ&vT.g?/1]\ź˧؎oL(S0us1ۺ"eW/AY!gXK%[f:dxd.YdBG[㇪r}Fk|$3VՂ`]Iӣ[()` ;Rw䖃+T +R)*W(Xr&N\QLoU|&՗+cջ3w8Z #="J*8RoN;]+Zڷi9j k']NKXYZY{yv/<-ostߡPGO^dƂ*,Eb)2Vh?;uF2|2[-a={BBʺRp=/_ӷzMzd]x믿!LgMǵ$;XW5ƯolƔSy|h!}6I>Cw9lp]]k ^Kkjhue~"b>91ꂨnmoׯ_#Vq}j?Ve vt*/Ѯ }s}+{}ꃄWTSDj:}=u7B>sW:}Ǐ|s^7g^SM絩r]=}7+9^GQ:嶺> m|ubc7P 4n.||~MTX*'`@8c^Χx5[66k`Wa[S.oo_j6Kz\em忟O.нvA^a(us@R2X҄B)均==LEZ݂sy:usm.`MZY,G`,\H3O}HZY ! e2|δw`Iz@Q)s &H*<+ tS(jWxY$r`D@LPUIdUI<Irsy xEil[阙WƆ†x8tt[!26a5|%S:PTWa iQ|La[8| 2T+L+9U *(INIlc>![8]kԻrݫπ#vanC\>͵/atȽyz޷۝fAxc,"n pl`bxIE?RԅmGlR&~8IyG7ڷY&+лrnF )(!_^[҅Vej脓9`QЦlA`Ȩa̫oL 8{0cji\gsJ<ҝs9OK* Y c%`)i9mp,d#7T*ꠉY[fs DHч@ b)9S*x'SA%c ;c*_IxeZߝWˋǸ-=_N\x|?~p iL>b {:*t>V5T]4)F)#RZyAV_}:lF3Kfg(4z@bV-e#,:B k%R0x`G=5!D=-'g=Sl?|v%cM! 
xͨd=lMI^7hs]׸ahoM1{/9ȕ%<_nM'ET~s_u M)l@E̔X aaƣ)B$V5ehK\Tl1g{Su#Mlo:L]kF&WxP4qy7chz5)pմw?5'g 9FTrdj(5*Sw(=fazx|z5eUK{zajul6ZžgoS1&mF ]M^J#FVby+xFQ#F {.FedX9R^0PVDDH YuZGP);S'!')8n62'3q)  J}_u\ڃzxӞ$0+[{@Z;|G7HnEm>ugR\m\ҍ.`t4Fo9t9f{aw,np*ڰ"e60QR-hz=:wc.wbe8\NJO׃iާdV;lۈ2Vl߃ n[ 5 psF, ZDi MH&ɂ.-M&X`T9F$MN&R/5eD.FQ4`ૌHgLdCcdbf+.fӯץR:Ŏgiabьn_` AFn<R8?;Ozq 1Q!R^!M <F嵋.E8{k2C/A0Pi ld[#2\)swl8VPH'ɛXo9&: +%N;D' fh1Jz2|GۃDJ11m28(ljvb4Z (dF~Hfr Awqᢢ6%FHM {qti>bAA.$:QRimJbBPBJpũ-8!F]|p.>5B'0s`1]Ro[)+H)tYL&PS3*W ik8zʢҩ6 - 4w)@b /SA7g!Cګbiq1H RG%_f. ygAnMwgSZq2&q:;/AN*aRV;rj~a2{h6Œ[搘=je;񾼞_֦P],|IYj']&&j{aQ,Y! ~)Sm]Mk]JsmdGm#6]eLՈuFnm`1}6-}J|P[o4ŷ: 7x x]r5KҮ/3eD5'A_i bw}Uӏ+d8Gnz~c| גLO+48;Vj}·Wx|EͅU1x Բ(宋Z|+5\ôJey -ltmIzeI{TQ9x9$4nJ5VK@%+ {G3G RtSiU&U{[q/.' rڲ1sOB=\kx }h}.vڱ= }816 )X vTLcF[K8 <c&hX)Ql=N;[p0CJf/h}^.tTrHn^_4zn1Vo}JPU-߶ڰtӗѪ ևU#W^ ͻi2&Orz ,W0"1QGGo6[ouxdvVnݞiwooͬ4X+{}= vomvUUȝ덙nwN^14v7c "M& ¼&BP5;LjnDh:vFtYpr>p ڿt?]Io3h{\"]spe3촌V2*U0cau rΛ-Wo3*MaGwoQiaŮSipgE*/ݏv{;jqwŠ:d%W aAB;E%-aS`Nr"3Q4Q0!S&aJ;f2Rʃp-A[npy :}%й瑬 $LFN=|z9sm4]sy-WʤGn2Z^-d[`QcAa0SK0dc+bN$0!F[-$Wh>(r&ך5yff1ݜQLFǤ"?JҌCC%pD0AP8@2qVA P~, *ZIn:=FD<7v!FQj"9D:8A8F!LW8jEY~=q=?jHPL L;)2IPG`t4JoFi=#e]֘H sE3(U_+ %bᕗ$"0bS8.ۜwL8y,qLmeGDS*ʈ {9YOͥ_`<0 (10kY4i:`ߪ?\'e)=&(bk.#H)D0("D.щ<|B(uQ?kVN[lJ<K ekؐ;.cDEa(ܝM~` ”%7o5R~}ST``!1:ݺp:)1F/6 /AW S&? n=0#Z~|(\[56 RKd$G'R}+" ϗ2L;7 gb|LmӐiRY%>L4?9:--Y >kȶYJT7kH%Зs_q"C|өTMWl4O~>?1QoA v`(VQQL~~20_35U\u]%92];Ԕ-uk@~|0Txݔ ?e֬RB:2?WU'[N'>Qb:V +wKJVbC;.&{tksGИzzfRA(TB#=0I&?yhd>F%qQ/vȁP ,kP|1@ 9⼎&"P¤`'A9gWs9;'CzsobZ[tf`W9Lϓ$(*{SR?\ʖz]E̐DၱXh V0bWelH\)jUh寘ԓ{ /pC$V! OPzJ!M:z,J`U;)L1F: lprVk/;^s%X)l'eC{u#t696Jݜ)VK@>u}9Cvݭʱ: qvhLB!R\q8A(cg"VZ_` p?{iOeo^t UPK~atԆaP`AIB] ܹ0bɈ,ʠ rOMPJQro%όN1wﴟLNjQJ֙vW]SVRJSOxQc1`pc@k-F9 #&; YNLj޾+#hfge️ۥv]wMY7k}qʄM$ _+ܒ^?߼Ӑ7eeHlΧ!Lt,<]pa~mJ_yX_ DZJ} Kg%|o\ HFJ6qh=>:h6y-i@B,eC-nzPdZ~My(?n8_&͋j.l2k`? *Y "]OZ^3<5%\Rtm۫Ճ_&PP,ǧ5L46TYb ,.fԓ:J/1P&N| T`Z 5i $IX`1Sry? 
ޗ`GDd"%Q2)QdR-8#1GQXKK T"Jye&1c^i9hR hc aJ'TA%Yih[)9Ev)+N1ٗq^4|GfqoCk*7D Q|RtбTJ;㭣l46U$ʬg95H$6; v@ݫmQ[BF2t]bA21AV;R V֠Ay#*g$^!.1- !sMh+Qfh%NVjl)g+p,?t37A1aBr\DA3y)L&m6']j)_;KE>2ۀalJՔ5Y+ŴAH+@K@;-aV5.3IoݮF!yTQ*>$I ZLRrApPUG VVB S XʤT,ɺ9",AEDR]Wh/"|XdWSﺟui`b`J(Bz'7`wE:qȮҡ+K~AЫbX )M{J<%T·@d]'$ qɍ )_>OଁTUHCIFyN7Zـg >etd{*u]nݨh Y 5*ȑpIL.#gYqDp1V&ɤTTT>J1ї;sU@@[wXZF&sJj],H}}y}؟NEʥߡ JY#4cM(,5(h+t>`HjsUUI=} >XDҬ H_pt[AvwZpTEog8bOFNFWKn]{?xikaizm!Th_5x6Oۤnm ꆡV"Nz@;4: R<5j;h->j^UȳIVT*#Xyr|rGKAvT/SmdQQb*&֖iTq%HH23WeJ:giY{⬍N@m3t*;S'yxsxW򵭢-,)Yu)S7ߓ5e s2k!SnJ%Sfw/ ez*>P޶@o {e2,jlu$e<JSBVeAJ" q6!*@*ͽp1F{݀V" qT}S3>کa{>h-M3qR`1Y  F*5Gt2˂@ժdnhV"U%{쁡,MiyaُC `'^ ͟11C.Lz7r Cn@;*:cڥ{'Ok|_N?W-]T*:},79 y[zY2"F+8tlvatSOΞBQ0k뒈}DŽ^uoј5?[@θIO?*T LNfY\2fEfdtB3Ifn5㽐BEiT:3:r1T9=J*\CQ،#-= Uvj<.kq `_g=`[պ/Pk:?P9d׽JgO{ D0 7AEV*Wb 깺l1,?z]=\q-'R{ Tn+ζWWsu2}z`\oU!W辨B-BTWoxp?[bEj#;ƫ͍6MY0)/ƨt F|nZ]s-ٯo47n2-XvKmܴžC+b) [zQNq<6{-N{7:1G?=țo^f) Ky$ן"/EݰSg$3kG-Tbo~L=lt*l^JZݶ};\ LRs_NFcEN.rWZ7-l-_Fɞt`?4-ruoZAסZi[ajɲ_z F]rMoUv޳,T4QW #uUv7 Z; +-wG?誐kd_Uܰ"*JƠ^҂+#zt xިB4}QWZUҨA]BueuR+pސ<Kvpo(zs `O `{ AZk9ペ~j$o;gؕP|Gݶ f6uVM7 ߇c lrea^Wv,P4 YjQDkWxjcܜb*25 DorǽAk 6!lV:ܶMtНkm:Tj]wn"KXڜ$w>eD|K/zoOiQ+a@:LVSbddT)3/%4@jP+']Rf𓎋?'f<F]F!yTQ*>$I ZLRrApPU%ί.T:1/S}~/pLzyZd/YV"/&8_1X@?82@E$-f菘ZL&R&%-!yr m""D _Zˮ_5UTeSKB<87'-0Ye%y!J1ї;sU@@[wژE~ц,*j$=ZtU(@HCIAP0KnCBe2NЙ J61z8Fl,\οH|w]|%v]zT]cཏ|R metv1dFb%(gDhc^*Y4F^8XURǔ JuX8QeP$HRhH&pj<.7krUz7KTH ӷxG"ӯoR Ȼ>[보v~w.VsnS>8DFߦmkE@7'gikdnklMuQvVRk -^Zydu!gЀ\&JU)Bu 6lĖPަ$; \u 7a=eJʞ{ 9W7 䆓[-H7~6ߦ% ^qf5H6'+SIVg\Sa]0L&` [#+E]퀴 }@sٟ>\՛mZ颅J7̀y ڬtnedU-0M,dMNH7??DPs/\̢ѐeZƋ.j[\:zTXJKP=YJ&-8`rY:KۺH{mmm{qrhK2eBzO>"bS̏)%fɁo*Y<؁~ryj ֪4P/ Ns3(䬊9bUjl 뚤.ר2Ɋ$wγjM~|6 rq'H{}eHn߯9Jӟ'c,]PZsJ"NeM)vT̎O =#`mKp1&Hll Ck3If{GyⶎjkD#1*W̑ E.*u?O6Zǫ$Q*b/߆mmK|U,^E[%l큝͚Dq_L6Δ(UQ*ⒷtJa5[y;p*l&llVq ( 5es9՛:4a}5z\zr©MP,tlTp%e3PPSL-AU".*k'\R"$E<@~2a*O1UpP1cXZW`Z$9J5؄|sK6U޴bC5[Vܳ4ovVLcZ.!>[fGx9?=Ng^m1؁Q KB0ȫ,t,HrAUJ)1 5%cF2Fűx>$FV}x̅,*9EAWUJL-*hRDrL|獒n9l qL;ݹ̯ƏT0jer(+ٗ! y6"gkM7x@#Gh+t̖I6h^Q 爂7"EBK=UZE);NN,;Jz=t:_WR_hKlv&l\VR˭idkiXתL,9 ѕ0`("m6]C+I%q %슌.ZJ%RMB(Հr@_j$]PMJMU@72vEqnX2NBT0XXx7l3)ޱR\~]M '}Ð./]^,o8bueJ[T>LAoؚcgd 9e" Z쥢!F74ggľUb;bV`#CtsiqUvB0^q*jC_FQ{f{OS1NU۸C)%l0jd(\{`ROjV"da`%FzN qEe ٧ V 0m1SAnq*"ΈgDqo䖪>":l:PWGO-88Vp]IU-#)pȊځ35RI@B.32C(֐'[Xy#,Uk=HB2L^&-1y O)_6?~FWZb8[kMMW_F#^5+ s4T d@O2d~ň?zF IwdwA(7 ; xsMȅaø5H1rTIjRe EU9 f( ƟeQAt*|1)eH/!>h'P%o9T"*7+iH%wƿ/TLkm ڕVYB&YqʅwU\kU x?Q JVSmn{GV/m8_]\Gz|0xd&e(|RfLpjyizMDÃ>>L&t^AGdOy8xƓ'xͩ_/;eS6zjlxz/V_ܾDo|cAgM9['s< gB;YF emOy^f}`)<30ևVPxLiLeT[0(8z¾n{oa>`Jt9x+h>X6;XOW˟Om,%RCmvw>ܺv<{Ip}syC2C$؞]!ܻrnZl1rm&a^{:n:\5dSWcsz|?u}3sۈ^Ӭ8"њjxRSBd_ɕDlhe&6M.B}'|gUFe6GaI- lrd99VBJYpz>,d^//\\Vd8ZjB$j.*[%c7k?ct>>(S$){r/JWkұ@herR `cU mXM e\EԊ3wr^9L ߀Y5} [W! 6:Pd@p2<箴ޕ>i|Ϻ+XCzSWx 3 O`LV1XNs AS)su6{L\5ȢlqDQ64zVX/g - k^yjm8GNwv>Oבh#pQd旋O7GªEƻݯXS_)F^˽'Y| 'l;DsuΈM.\,O@ >~7M|X`GyJ3T0ҍ xm$w?n )&8g-|N>_OZ2KtLdn9GQsO{o|[~շoGW:B \p = >+^M_EEEKgߤ\g;|'46q\fIno&X7&~ZlAj4<-|)_$+I:ܣ>Aכ,r*B,4vB$d'ു"0Wι` ytRL2(M:VAxkJ3~z4mm[܉O 8T(}oo7RrquO6\Z BcYg2]A³T-/Մo˂ϧ٧6c {C4t9;$yZb{P ~eL1.dI k8k58P5̳yxyOf&>M 7pz=w26oe<_,Ƶ4вUGm/y0O.O,:jqut\w..g9|7QpE4//Fѧl umoĥRuf1_NDg QjL@U<:k7xmLtM+ngPukz==hfIfNhLf׸m}>0_,5$;"]{2z.r0NxWGҮyV_)W<=,4]ꗦMWw4clLx?ػƇFnį3ȥ&+4L%riCJZO%дJ|SB:GC+Aq ]UR*Zmw*JtJP*`kCW5[UEدNW{ztE I]0Z͆BW@qNWD=]=ERT]Ybt°=]=AJA `]YftUUEi̞ ]T+3WˆߍoMyܞp8z&y *hr>}QooV0u%訖2ٝ&y>:x|vҾoʴ7/7J!aeU2Y8s*c̍ n\!&L ر^ EVreoEOQZDW =ѕԮUE)잮 ]9- iR5 H3X>t}8]a{Еذӊe{?+סZu^(W#+q{ڶy= ͐5p% *Zw*J'tJĐԕ016MꧡUEv^]Uv"]I]0C]ҙ=]=A"c`CWkBj骢TnOWOes)CW. 
&hw?(+mʀ Vf0 hNWd{zte8+sW;;Jw:IAkw=-T\K7p MWZ:MWf?g$h]O\صIYi䞩VZi3<z|rRk &$Lia漷 $-әor3?9!C*.N斱a*7'l5miM [}Tmy |#ߎDc@w{b8{{م6׆t`$c)׿kjSϗ=f S\r[¸ūvki|-fCm*J +w7uEt4vV<=vMk:stN!|DY+o(bR([2V&xHK2RUsK֚ZrOm&Ck$ahg9, x1gj$pkQ<9^trm P +)RSAqϔ$`ērVi][oS0N,Yrp4!QJ5>B1IQ$nLaIm;n+!QI21ߊLX}#֭hѴV:'龆ÖJQ* 6 %QTddRِSό0&aZSAHZa0Ú1ڡmdSȲu%6 hc&G>ɩ<tA0Cuנ UD$$I$X^ 07F[bFGcT)S)5+ƓѨVTqh V"6hD9hK>|#QIԴD XyK!:Kkۜ7;!:"7,BZ$}.F)c$z۬HgY$W^0Q1ɇqUк8x$oS[1/=Z|!%2߅!T8'XCQ $5ka!.R`TY,T=Sd c:b^GD-6gSSh|AiQnYlеh`,1Wg 98' W#FaJ; 'ƟS{0%4lg3`xry~gEպ.b*(RQo,XmSka$Cg /u3Tz ꬤQ0tT jT ccB4| /,BК<4N"DiyE(C`$ (gLt,Q/ OVJX}1SN.m0Z66֠dy^ۭN0:M<μ^I^ܶ2%&P`ڠ .ZHrx3  *S >Y ,z꣨Zu\A=ǒUFCk'n0{bwohx5I x0rH 9P.34v8 F:)E5XzT.`pz[K ]*Q+$OeV+HU; WYv'SVc4kM(OV0 .l Ep # YiJF\TŴzgZ:@!@2 ?{Lڰ bRGDzS43707kUeR @r;lW` "1ֆe[ɱ%a!S}}p*\AUqӆ D 6P7K‚-+@-0֋A+X4Sqc9 |'Kq|P)j2zJ &`YPBh℞TV/^bDݮ K(R-0 4|:W1t[U߮H;lkkm˘3ՀSj"O2tLr|I04s(lUNMLDN9&3_~43BFT%o,XSSn:Pozjn>4YBeh t'iG~^3?p6-zutFK3Ny/>^:IŬ8(lpa^3S|RV8\ټ8owgy[9t:;KTA֟q9/FW|:vu,F]]); f~TSUI y}Dj:fֈg@/RY 7:- qܜr|Q]_nלY.˫G^(]^4ҫGȵZc>d4!4Ed@RVMV_NP'N6!=[րFy6`R(eѳmmKjKR[ڒԖ$%-ImIjKR[ڒԖ$%-ImIjKR[ڒԖ$%-ImIjKR[ڒԖ$%-ImIjKR[ڒԖ$%-ImIjKR[ڒԖ$U[fԖF|S[\aFm k{jKScؤ|94'FJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@zJ 1I 0(`.G>{%T@/P dR@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)^)~DJjaGZ~,Jj-|J Xi)^HpcZR@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)H DJ R@"%)^vj=`FuRSMk\'߾zǜ|w?"!LG.xK0.Zɞp V*I¥ \NB#rW0h̵=wb֐nܕxj5sӫ(&_6+1A%ˎUd+Ī~L:VYZ#TG5uKXg}ftuh[Ji_IOn~h\l~(mg!/<7 qlm@yknB ui\mf:G:׽;ߵO︸y`,9oo)~>_w|@C|nl ޝfw5W~ l<7ӓRE:Yºߑe-S.QU+UM5p rgn竍1[}bsj~O X*vZx!mXZ|9M+l52*W/'›/_CC_˹UV|nVLoU;^ۗ]櫎fۉaϕ''="i*:8ΛaE_@݃b1zV{5 0 ?+E,^FX8~ 2,eҲ#*tyYǽFs?Iai ^Nke#t3~>| +}=ϫw{Qvͻ}q궄%4N`>WMlTd :*2W"l1|uJb`ǎ& s;j-w=[b/1Ǯa;?mf- E+ zXFe/ oZ΄e(6Ujl3+ݶDX(%>wY)ntCƆ8i`OELlruwZ\ۢs^.Ww:5 ipM&;ٶ6Խs2kC笔4uWZ]T6%@wモxv_.f|r߾e#gP O6np~Kv.`0AjCMR7 76ΰ -1;̛LpC/z=-Ó;[-)'!u< b=g-# zû/FҪzXIzć!fK0R~}w87a< .>G]59Cvk(p^M56gYaSt ^bѕV٤k|E/_ KxS]pw׻GsNg??;b5ﭒ77_x2怒?fVW~m^7צg_=oo0Du֕X74W_ݺ2Vb{%m=3`U;f}w5ok.לk{_N_Y&,f!\^v+Ay'sC̍qsg>1~㙺%h&9[ju, e93>Yau-18^.'SgVgsj>1fVyo#, >6FArIʒs-&xYOE✚#s`db|G}-J)D,D K1/ U2nME][<߿/]HIWUnem=)Y /9Q6Xbi.yX19r O !8NXX z.T4MwhN`novT?K*jV"~i~z3`UmUik`sPT*tx.vfweAᐄ?iYݐ`_JyFoؗ*ާIdej4+Swe5] wƯlRYIDv4A4^pO:\gϗäW2U*}?Y3lXeW#dPK#5[X,uL'؉^eFjɅ0Me6M93ZtY 03_@x.'i5N[”5d.1Zlerͭ(gj{8_!e_ = 8kmE*|~Q4IdK1ٚ{zz3Ǥ:J@'W{j7k[mehvqHg.d%UE]\xkZ%GDO*2q.^*{R'yl 4?R|ߢh0^ QdllnݜU"բ["!x?O`]v;;Ʒee==WtbvNemo]>Fwqow1JeB4dB%/){.NzeJ1i%!Tc$|;{髾lC> v.0J '0}_dۿ6P%8xolvQ!&zM'=†Ƈw갣;N҂bw)m0zv=w2"[k\H`j7K,HB$-zOQ`u,8벊2Aj'< r!X) .J$[2gIU~aPm:[N}~@ =d^̤Oy_]:&z"bCr_=[Xn!";ʭo-wڀpHQ$.HrŕKY'Eȵ$AE#HB/u:jDk ͔d0>yG&MZfP K$yKC-I4R |=+ &n8i}tج;s?O /i#.UV%oJDǽ(/4I4VEY(1Q_?Ͼ δL򗖪g^dmoəMYz;rJp N vtb$y0Y\Ƥώp0$(CyVd7Q[`*2xkZ>qE8ܼC !~l#uu"dJ/gMD/^$L5X/GU4Wݚi>KpEL^@v(w랦W&0#چ~yݞj|:=jg's>y?97k(6jF_|q8|[3ҔaHJl0b0үfq!AL87\WV`@hS} ^ CZCxS[ -Yͻ.6㊚f/gJӎ:}V|c0zVԾi)VsʇuP^}7)j_o˳r(糂ԪGb7 PƼyh6uCGɾ`;w̑TD#k#˺Lz)3Y'K- ýGf:i!FY3 3L*q|fw(@@9dFR۽ةb砊_(r~NP9 7g1-tfZa)qYI@AVg#_,Fi}ڟ/MR%/JG^ћLKѧC~v'9"+D .Mkvf)JpLr0r%Qzj+g5ym]$#=4/:tHZ0bh5![%Ko⬿vbOy,ܶ9anEɡ/[ݛ|W'lك]3h|_Wܛ]M^B2={nԜQvcT/.@dzS.g6d<_˫QDPVHLV4XM Q4,hEUJ-ݥiN9wڳ}UbwVjplW B2h3f-gYs΄QxijĨ/G=>`u}VdRwA1,/)-kWU}e-tH;ڲlFSM B%%bX5٠9Ii+lMɤMnDF;*\cCcWA/iǹLj=`Y!! Ĕgh4"3l!':$:BrN"-U-~IzXyXc{}[=g{'(vo9s%J͔6xbɱhx.rw {3k){uT*1BDo|]1T] )R%VσtQUl6mRT+†$bEPpzwwPy,YZL 64,&&2 (šB".ӿ54fU#$Q&GPJ]y!l_`4Vk:_ǟERca/J&j s02 RL4.'=qFNj%cOzx(F iJl6Z3c CGַYAjP ƃCRvI:ERL" )uDt@$O2m3!JMNqTQUWmO<`oq.d2* HdHG&F6%#]Y$rDY0NEDhn=:᫶Ev>]Yfd[7Bf<*N U1e %@pz1c _\tT!J,{ ^a0x_D4%P2KHM&لPjDS? 
ipfp4>]R T3R,bl@(RV 8'F/T]>U ~s -'ꕜ=Zi'geJҮ:5JD-ab":o;o@eIq"gJkS:ZF}ɕ)琽 9ޘ*u9;C#8W.?ۛm VЭ;Ԅ䚭h;6;tZ^X.3`VI2v AHA:d<u.ߓxx@CKyfbd jV%x%zr|y~>J1|u 5>EWt YygW:|Wġ \ibԇ@5-ʘ5QƜM,wP_Py*R(y29ʩ($ڔ>2O;suDW01hGLQq=WϊR6s ^GS&+jҡܨj?vb3<lofRAp2@֞k, .8VA!7w48Ȩц#ubb F$ {@k^0*X!tNqu2t*:l[,B.>ّh ,95_^kH.=miONk);h7hcXU+=&$șI@Q;t ٻ޶neW8{>nv~ف-ĖTN?KK|2m+ Ф^&\s :ٔdv14J͓TCFet1{=neL'1[}Gp:E$eXXOVW %'%EenE2^sy1mO?7krpv#JHĞ@ZsYaQROըD/ @/IsElnx EsKXR%Jbbf혱-rHmj"g;b(#6Em jw%tv7WMV&<ebR5$m9\g#V HC&iсӨcbAaB Qed]mp5ra/J5`<D?ED!b57.ǬIG'\[:$[g]/.V"y.n^{vЉ40( R%=RdF%H9y/y.>. VaR.ROaFk].ysFTsdF1zE?PeYdž5 FNggio;yޟ}|b?]^C/7}gI5)'!S : A  JA/eu%)9*DkpD͒MrEk4,H#5x4zi˵BrL&!rzft)"ԾFSg5 REbw,!-2) )aD!a$ AJ{)4( ȅ&769JIF$BJc5rv+tr8b2CH<ܻl-cIKG㙵8M|e҃BIk\k$S!1'Lv@'ޠ&#Sdh2)@E|ljWc,nlI"›I%/&Wb^tR40?TO}ѼU_VvÛA99Ph$\1YD02YF. 0jѼLVQae (~zjgٱXMge[ˆWűpH VDQz.csN5_q\\$TIQnYų,~| mnl&,dHe<헷uˉ|47':;+ׯv6m? z|fzpuAO>q  ,J+XA 5_%o,}ki6٫#ݫ=tK^4n:7o;SN@:{٘(M _/[u9=d5YWt<0 &k%\'kP`RF e'Jss,yg~Ƹ",7ztt>ȬLH$n:pRt@(5XYFA51Cg mVvZKuڠ&|6V`.#$e-V&?qǹfN(պΈ|?Gu"0).y!Yvwkh [m<3e4/=r$6HL irQPJgԡٗYG%&wTAR6"Bi!(j*1˳A'`Sp=r,Xj~vzc~3RJ3\5,M-=is%XXDNEo1ʦMn}SޅQaąb3efؐǙx߻LԬXlRӈ!AhŽNofܟ].ϏҙG9`= ϥMthrxKr-07,F'eQcP{7JCM\$*~Aׯ>( Bےe(Ԋ\,4yIgi?=vZwcNm޼yqя_^ެ gc1Kb_Q>ז]1VzyTN~sF=KckFkG 뇙;wfu#Fi4n0Vb9ǣBϮ/6MN"ZIQxw_054М՜u໌5fܻc55 0,ֆ8Gl3De`Sa \b/.~&6?;A^hh6UrIU8 !مGCu 6H7{뻳w%|BEIf+W'fM$]sY"lʽc%2BG,ãФ[Y~>pU똂@fUFNv6I y0D]ft<(Ǭccg..'pҞ3VNB%ǸwhI/oU]lbdŠg[\s-.J~ŅJ1X+z Y/oR\QIܷ` VBw{%02iՇ't"p+EQiQ=Eв>ȨITOa=IF7DJ66ğ&td1֭9n @(ڌe61pOFwQ1y "d4)UD][9պۀtHxEܯ 3F6J3'\u?ѝ.0^~Zit[vE{z8v )sV[͟L:Q> ,'|H8J=GWP%+c'u G>Q# R{UbY*ØY  u2~WU#D"KYńjЂ%BrTҡ%-)R sIpTat|XFEQcGicqq se~M;ϟ[ᇿ :Y1gL&vdFh$Ck . ǵTBEGgtw[y[ Qh4JÚxM+t`\ )s|7/ Tuf }&D3x.sDqx/i5 h; i>$mJ"׊=5hܹ$"ip1E,9J $P%IM?a:ʺo]j&%3fR$IRV#JGUY>FΎzm݊͡zx-Ydu{ uZ6Dn$m6R՟"m̎ /"uP>n_Z7wO9mtfK߼MԲO-uvvPz^iy?L -n5.{~7س|Kѥy Uihf!~5_45znz 0,y;Auo{~.Ç|lc@9j,W2T0r|\])U I9褀t1x`^gJN\^cbAHN9): K 7zSXó5" m#ABQmnٝJR / 6hξpld#9COM9R3w9!D!l2@,w@q"1qߟv_3\>uHx9#=))~Nӧ&p syG6u8ꑯS5=NS>Kbc~ĆjS+SZ`M(neZTʸO,׏C2;70 T+kZس MGdA͟hW[ׯ4W[)#:Ѷ>#HCʂ ;nt!$w-ؿ o巯OSt$j1%pfDx 0?v6|bڛ,kh.;ܟZvFmg|zvS{bEi.D԰tV} !wD+js].9 ^32QQ+K*,ĠsJ^ERW]:FفE}SUa][s+}Hby3/[E8ۙ\~%Dz[vʉnInc}T9Xj9\=mO6 µR3u 4+Ba#!fB|@\8{_5>xw@ u:~c't4y4}Bk26g2NLl'}[]t <5$\b2$Ľ $Ѱ؇lC4@NY)E\B\n sV+Rlce߬L#&`;L 2uzn;n;/R|]) ,˻ Znzϳ1 ^}&}1~6 s+h'i&,N)fO%BjΛ:^w{葟[Yp%ݔn4p_vlx示 =;%BoNȸc }{Yű#pqS4j 4 蘒.5e_seI\gAO3\?y]Bg a<21w1<1K a]Ją12C4g\rJ2![lieI P<)0M 3 6>3Nmɺ6D"%7R 4 rZ&;g5(#`wCL} \|lM|Ua*A΢ll&a`38/M%ߒIה}l|[誙a\~WAї] =l |Q&9qHꕅ#jb2`#5Nbdcu-W.F1X,26=)EP9 %jCL8<\m$tF`56Nlhk : aMWy"="D=HσnVsraͻ X5Q1d-]?SCq|aQ:J\1q̪oNwy#Ejւ+)WreѪXJ;e5 Jk`j^m\FDetRmt.7LEv5>l|ta|0j&Q# +h5xQHV\ w:CNZZ%b ]:iȮ[IO}ai^O֧\ڃuwW̭D,HXj㞩6E_9Ig'o?)ԓ WGn@J 0 F7a22ILӿg#Vh sN3OJ\}[ǣl|uK`QF-+.iihM#k &9S+EHn;"ˎRvj/N6PQ6d+cH󰎞04LM/4>8^ҝ~1pmfw'oMu'l875ع}\@gPpsc U:S@.gRVmjH+8!6I2YU@/@zݜ+U>3Vg7eʍl*V)gBTO LQ̈-X$T\1X>'~fqFt+?D+c)#X5{{ T7Xvڒ:ߋZL:2CH5QF|Ͷk CQ$ud$-#bLEbdjM OB $#HqT/SmUEWC/ipVqQ.4Ԏ+BLl.Xtkcp,88kƽĵ=}M );q_zrt\mDyXLPu{Lz6}) #F9傛s2'NM}_%1,EtxkI-@@o-.VOIO{u\9|헟pZ]Kɔl!~RMpk6k).VcB-k=s:L2ܤZi1Cl5@%.QlFO&|+zv|r||ֶMP:$5=X'+D3%bYnvҿ vQ'7~)6LfSb)KKXcUw]Ϝ]-";k'Y^(c,BiPOH6BeqƊuG> A"$MߴqT|RTXm.ִRkl)&'(R-`GQ$/'|<^vNT:gu?$¨֢XlPl9`T=PV(6I&ՍcdG(n0aЧN"g2jD U*RV=8L,2K u [[]a'1@Չd0Jk l %Q }`S|CCOe s$ Q-ekCM_2RCKzPKjTt{ h^RlWkŘNyr3%;ֵ6[ κX1"P*U4lCBaBmk:xg6Ӂw$t'(4oR9ϳՔ/O'~/!Xp%N8S%nWZ_B~vy՗L1XpMXɕL 06>Ml 1ehI*U3#Ԛj (rE:)9E)C$vI|(&.{F̤l5?ZCK*)M9yĨ_ujmlF F^-qԢJMm(j$z"!0q@8dMRTѳu$8az#&*A;s}:_8ZoO"o SZ?O^&g `U5B,єs2s a9zl]C =Km:GXh43 -frG~-Ge#Ĝ zm;k]`$vl8UHM& P 1P5DB5i)eؾIM,jТF6(db G IJb5-"ĹK2Ǯ{DX<RC]S+XIb*vRXsgw/xg49Zc|mYn>XWB~FWoj\ sgqsGӅq)9LKv8//.~.F;`E@$}Z!Y"F ~q.pvi?ܱp \?YcFޛ 
܋m}^Q7A_T?>Sg ch*`8 eؗ*-?]JKTW8{T`hoU'\䮺]u)[+tWO{TpoUfJˆ| gwU@n9{p=[r:kr׃6u{~Ğ?0Я~U]O~o889>;?h^&k'K/)5ݯ'+Og"_YF)~:*g |p|Tz.5G6lzUZgXb~8Qwюx_ kx&XR}_|zE=ԕhզ2Zi Xvv4b7K2i)^̹T/2bǙ-d| `,/^G"<ɒ]aJv!`C–+sQ{n~^|ͫ&׉e4OAg.oC25!yRQZ;`AEd j*C%l5FO͹Ȋ3 g0ռˀ/u淮\4M f"ێیܷUWgOq5p'PS&'Gv|֟vk1HCɸS׉57ɖlB}|Me֔\4B-n -n -n  -n -n -n[pmw[-n pmw[pmwܽOJ2<'dRx*O]Wo_-}K(QO־_TKS6k9H.w»n)ZgThCL$`!1݂wIt`sDsЁBrM?7ݖuGo|ݔH uԪÄJKvPz~\T5fz2 Rc'Kͣb;:z$Sd[?J֫ŧp G1 PH4ecU[4"Qk=q"O*ƀP1I m{&ĥXT*jT~M,G&W1!55`|cT(.R^ciQcmNs0?n[ڨ!gy۳F㳇irIW;:%+nʡ1zT>,p)Z%o];yL7ӰbV<`.x9طH$Ȏ{O`B8OȘ(8r\h'FG*փ6!zR=Jg{z)(E.@M\xOc`Q m6VTO}6l]`u1m<6~ϭ͒ ,eM>U{U㻉WkkPkZF zչ;W}^G5\;=]4hݯ7v xٌbd*Wp^@J*q*zpFh `季ܬKd]4-u& l Խ=:g}&PyWnU3WV8fք.2+SيZ;:X:@OLf.Z #Pc!lI!!: YLn?_>d/$=d2LJo~:{鍏svv鰏ѭrUc?㻏fiME~ӌ&%R= |A"*WXk4Fxˮ}7L~П=>ë*Eљg<a?LZ(=qw׻yE'ϯ4{t푩Ed|47ﯿBRdprWyʍKY&9] Нn[ɑ Dk\y}};:FzMh%]΍ :CIpQ^\N4S,ՍNfq;闙IU-x)h>{}Ә=lk]oʫiaf]U 󔻿b`};>1 Utt޻mu;i-QSe@j\2@kQ<#mkV_㩘Tŷ9۷k$-;Ţ3MͅM7Q U}nҹV EjS؄ J3=bb6^ ֡gȩ3"d^xϒZKjm(o@>?n5+ω-! #,&":2o4Jȑt2`bmBm3J/KO8N.[jƢ͍Bm^ǽQB{6Dw.{Jڅ{C(e-#yPD<<9(95aVMG?_LPM^+ ˭G)W'\{[ y^F=%Z Zz\RlGv3_tgbds: H^?] Uع\fP͍:׃?7PjWXԚɩ?}s-,p(\rACa7E6hjSȤimi]zY֕ >׺PNۧR|㲙7v#ݯ;Qmo.?⍺F6qn=iyN`u1Mq09ƥqf P)YH:*zٻ6%Wrf~T ]7 8kS"J)eA,[GU:zEAȦ$*R*e8"!$g:^<943ͷ!2x,.2Of ?)MQ6 r 5"ZT(@~?v+{>6*oYA/o'Ve?04)cD'+cYblt HPn-Whpў]/e1Nf73"O!3o"/R%CP&$:ҩ(RJjC21L%0m*btP,aWA)ߝ&X>QB.)+.\jmPBKФG_^9UCyDKR=k`Oy NPD>7\aQ6;8>ʣ(.1*?N[秡 1T/JݫO:V–=~D}HEA|ξ_),\ Di_8WϏT3r@}nԩ}?f͍ח'7a z*̩~dܶ\Bϗ ;6n?%jwNbfTFa IlW];]W92rJ.*uQ]T':0ϟJU^Ei5@ܲcQ=O?*qL1Xo0] ǿӇwo޾~x}݇#ћxsku-p=N E0kZ0XC[EכPh,oR4װDќtY)uYV`)qh{~:6 ˯mos$ciHj87 ;UR@mQOQ4Ri4SihLF IL/M؝~᎒QIHt45w$xB ԠB~ ^ bYqxͫ[ Gj,ԣ 1}P`בYl"8=H N';[٘Xav N{+CW4Wt*1LabB0&2_.ܨ}lsILA Țik:V?AIŸ+Vk$^V)hk}^'dHt$/P=J S1"}9c#˔!ְ.p$2A O5Zk'm+SU}p4o)[~EyaxGouEO {,gCЇKxgg{|oV`h|}B?u%j`drK 0- zTA(ٺT-DJ+=FDEI>E=!D):RۨGܰ8]dn"ΟmO7j6t!du;Ͽ+ %" >K#Rx+U"hd4`f CGpQWԢg/DNUSM*D-wݐ* <²="B(P˵cR(1oLRA 4{v(P߮$ˋH gZn( % =J!t.S-hQ)/R4 :rB c*P| 0 kGQ]}m6uK/w^ӭ3=L w3xˤϹ9\xFYmF҉|ES*і@uhng[pmߵ 89POe48NA 2rU4 k|ݾS=C-\1d^$r1R 1pfJzYʜ:ENYg,glk g Ow"*C 3TF' D,g*5uHBwu Y<R>od%ǿvhPDR0`7yD$ RCî7 2I'PVgP_'mmw S"8 R"< 1yyđTRRqb8:1[٘x&9.!,^S#k>Qxd hXB 99GJ, )1}ѶVDPGۢ| UOt|%ԃʜdcH ꀢcR-0N)pD&CXo-ѝzLXgWۢUCv%.^m >R7#!o9ePM6lzгl➿>kLn*<g$KՉEV'6J@fVD,N֨xSL{w)b n?R:k֔zDb"&Θ  B1N5$KZ <SOL%R-,#T'vX9̩ zj]tivg(A +HTȝ "tJT@A!T nV+w= n`E?`X!5ԜsW#fgx9oKaKK 0wl;`v-fCEDVCrR?ӫ;:TZf|bkBԖ%[4^X D|%+1r<._" \g%" :MQ 2xh1a΋OM")|u x]cy|!LN?)>ezt8Jv?5]j!D 8:$i(?~FObޛThG !qN'yx< 3O9A2Y|*n9 >3ys+j7j;'gYݪקc7"D%5a,zÓ$x|&/'( $S>UFe u$4tQ )OMKm`u4"/"uI{ST)HnQ']v&5N |8HR1vVxh<{Ynȳ4RhCQm(0j*+QmYZC= TD& !T퐹3 ťlgUV,*%+E!d v\eqsjUi(g3tƃkB~n.ɴCnf՛q|5D'va ׊ AbhDo?%BQ۫i2~SQ jd N;<*^>J@ŎR fe4Ybrgr{z=N `%XbnWlf}%:erBgP>vz-K2;p9yZT Ajo(m&cY;Ɣ\;!2[#M*'؛;*7K)y/9٥=X퐹r+*KYJizJKn-#gD? gr?&[r4kvǗnA&|bƦZt9]zz!n[l=^}7̇{;:%8'\RdOFRBPH B!U( T*e(m'zӁ>W>½>äK|@'eWz 3_8`JnI-IpS8`^G,#fTrQ8` ,S8` L)0pS<˕=KιQA 34ٽՉEV'6J@fVD_>cw7qVb∅/Y[0H!H}`AURT LnJlsJJM$b*khu:%* *7F+BYݤ qqin4xNZus?2-3߯o6,[b/PoCPO(F_=kl·~Ղ8}PHXxNրK){C3jZ`tۢݞdY 7gJtLĂ *K5K1p8r#3!P+"r$#naI2Kj6sU+P(I&zlNG4H106lW|/G87g~_-ڊezHLRL籦Fiu/iq""1p K2*(@ErU3Faɯ\H F"7q #ts u+c3iKV 2v@260j~ꋖ:1B[mVq݁TV1iũ05᪴?8%Yb%HZbi"sT$crtT罥Behu9ۭ ܅$-.> :@eH-SB

PeѝG?oB1Z KqDAh(3*y*w~)!T83oϐ OS2$N=ңBhBI֪X utHS7 Ew]0~ϷM q4[Pht٫J<l"Wzy_YRuy>*)j,W2QxJKc!;2yWhC 9 ZR.Dȼܰ!!he$zq$9 Cj4Tuz"ëNjD"CRLprc(w>Rb,8";#g30 |t;-AyXOn*fQss-y2v`dioK˦5ܴgpS/x<,qJ UKLଳQ0|C"x.:%3^ <p>@QT#<B2܃czZġ9Kh$` Q]*$7,աit4}FX4Pcz2 Ev{B :E1iu{-9ǐ&úMNd8y.!շEڟ3 9.,ɧ0 z_i8qu TU8E RAZ%uPRdWã|z.{ÁF . i%8 <]n[H}0ҶtvXs_vgz̎##$wݟ|W`U Qs{'o^:#C6~={ ? a޵ݘn=3]Ѥ:[︐4ofws4 {Ys`MS6>XЩ8Q%,ZՐ|Agi L8.nR闙sY45 _T~)5b~ә>@!y{ŝk鹥\w)(gHfb.LB]PxKjon m.Hg 0nvKgʳCt-N:q;mZ)5ͼMu;,3]8ߏ n.437PS-rcsPd6xMKqq9w`i :xGj)Roer?nt[XP+Z; :+N% pD9%AS LpL0<(Γy`~ޫ&W^A,gRP }i%ا~:^^<2=]^F2" N"mf^#.FX Y `Q#jr1PJ 6^rFjx G* ԀUzeDZpBFQN8e%#O]g늜mΌոoOF3UmaZ3{r6|(qK zp5b* m.+u,t,*_^ ݍx9}TvϚWuY5+-I@xPM ;(|QLw֮.H{ֽ|w@n,n9qy\4=in -%49!X Ru`e{GAK.E4t&d4e^>}ff%A6\-JŽ4_00/:>GDsɒ)aN6- w1۾7Я~}mxL| ca"Z2d@5rít^]AEo3 >^$j?d_=.qMO#.ZN  V'oK)d "E }n1q؛h;ѷM4F?A].KQ&nsaf>FZL7kiA޺ EtF{&xGB(. F8t(4LjMeD.4N 3x%LZ˼G놦hIvms2{ۂ3SCpqtdY5+|U{&\6gϲԢwVg45Rf %t$@`(WT(5*)(Pd1g@7$ヲ9yO*6 (I"28PQ" 2KjsAX3TcLPm_0&h!D@xB;bx!IrPd)OL`2navnf<كRΐ d^Bx`.bK_NL|]g.M:@ x6(H .$PiJÅp$'2J0cuˈ&j,h` l kOIA n+O\bPbI7he_äTd0D `۳샹KeVR_nbFby7=ch4s0 a'#rrGy 'oO mٔ~ER@K6\ ޜx okm"L$bTC!Swz w3t[^Dggwͣ(MU-r)ǽYJk\+.p ;UKcjj͈_%?_O.Xb̜+7HtnGIv &ʞ\U[7n[Yd(pI\]&'Dku6ȶ^ʔ6wcd$]koH+] Q2`,f7 63vF?cɑo5ICFj[ѐMIvUj9ϧJ<#OqÏI{yRcdQ{g8߷?O:ǷG}quOןVv5?Q!N+oN?5WW}h]F ($rǻ7U},(D$\$:;GȬ0yMk>_wqaǵT%z0$朑Z`N:2K#`MÃY~)ag455?֗dϷV1Ժwv0ULn:W3݂j vj0aY߿ڵqFw@QWVDEm'AQq*+㖲fnתZBl|g )3EP<)4AOk"^7)WS%-EkNԇV:k֔z.hCL$&- ygLD6?a>/L vKcEG[BG"#yU%3Dn ~sBcb2X,֝+z ˆf'_R%5vٗ*;oٯx#wEkkӬݖ5g6 ,c7OݽlܺZ\5t,>UlO{q%VΝya2qWRocFZlw='Ue|t]Q>۟5]|L.Z>m=V W+-fn57Nnb_Z<7GkC!6C-׎نJi}` RHU.jrVU(g}rJe:VaV:nj’6y!}A`P``NQ-hYBrK>H$'u<3TVT9T[XTwO%:Vn#ƗaY婗 (ߌAmD9(!->˜pemګ,xC*і@sZn1mKVa u^hq $$eThxaFZq& 8#Րglk`騔*F,?Q9ȉ5Ίv{eW򎁊wDTއ"&g{d;ǿvhPD_pɣ'Z%)"dz(Ӯr+M jZtܿyvn˅0%,%bsGOK8O TJ"J% (8vJ.ؘx"r!!"^)S#k>QC18 Ar%#%Z[[ʥb|ņȺ' l!̃ʬצcH bcR-0N)pD&CXoQ<`l^G QlkuqbİI6ׇ= H䂓YAB-6R E&}IQ~Gw\q ~< ďV~{ý$79{[^ۋb~Ǔ/uX~=~9A'fݥ^n[6ZX2pYE^6R_μ?mAWͲ/ x!W=z6 ^-(Y޶oC?g5(p &:o} ^wڣ[|1N~yszn8]`f{gn(/{oꕢg|tStqH JXF6Lkހ%qֱ$'LmơKC?~45%N%( HӨ^JY=&9fW$jh D:'ceUsa=76pY$$O5Zk%}cJFP8,֝^ ҝ7Lqd41}T䶤x{ږ25)O_"֋5gv&(7B8-3!<@F)QV  u@.$dk>"ٗSM2Q'V@P%$uA;Zm = D8.DxPeXGkk11bH8̳?㔮?v2̫*+4u5Xz̷~LB O|L#$K-ZTъ<9FVFmUzTyQ0Kܿ[(Z8jn;j1a}fWs6W]-orW +R+R+R+RګVJ{JiJiJiJiJiJiJijm^)^)^)^MV)68ThVhVhVh~GhVhVhVhVhVhVhVJ Z^ߨ}>y/QA*RY1:4ՍA7뵅T b{|w EQr >o)U D-qFNG"ʖH1RHwxpV[8QȄQ"e!JΊ)ܛRxEoI +挖,z8HL(hiE eV5ᴦ(kuT鸷--5\"Ӊą!`^x\Ҳ5 7/#["wcIh&;q 1i6\,JFCȮU1 iC f9c2N<6E 0N-,2R Ai½ʒ@0ÌKW_Ll|mV}EȮ]k9C;EwLĜ! CnxmIG99),1A kNBcƣ["V’M*lW*?د^tǃQpp0g7(N4[)s6ndW^-H 3 )ŔE ygLD6GB[ eJL﶐?669ջ4Afhpr\? < sxwhi^~ 6nr#Zn\[Oe^´Ze݋*w!HjPZ!v U 4٦[ yyN>S-,m"Ԁ'q\!y`NE]1TKַҷlsJJM$b*khm:%* *7F+BY⬥# 嵸c#^sy.$\ͼW^Q B-A=J -̞Cxϛma04s`!5٦Ii~7 WP@oUdkk_[ im i*/'!hGau~:-?OB~clmkup5nyeo/>!n9Olaăwm Yw5{槗~38Y/DpjF}!{12׶>{_~jE5B8M:q,I*ѓ&QBHAv:dV0"2R&"+˭bp&BpgǾ7oƘ ]9{{q~ܵYIҶf<%:1rY&JV ɀ&P :KL .M / l`.2$!zC^Ѫ<X +tJE]oV6y3i CfN Hm]Lҕ!:5s+|ID%PtU2zQ[ն&Hڳ絅1 J*GtJh%Tڧ`6V'c[$c]^/:SO wU݁8:w#T~壓C;i8s6K44.A.G*Vdׅ7%lsK7+0u%|b4e[Z Jg]H`QHD͖(] D9T1$-R,Cu46B۹[#浐E:%^?gz:[wi hϋkMd AtB]),e# Rj Amdl싶q-PwXxXjkn6u=IQ_=px8p0~ር)jXSt"[e+%yo& ojSy2#(Mlm¶p2lxSkLm[lF83&hmqmQ[ZviqU^"`} [ .g_a-3 ֬Y+zHvm ,jZ"[ȬJd+:)@& dXqd !梳HlT mix*[""!bZn /b# (3v`*>tuBg㥭^.y[EDcQI_*[Rq&!#"[ҀV+1*-#bkܠBGO'pm5.-.\P;\#JKl@BV (0hLVa"Z}akq[<--–.kr:QFn>m|* E?>RN9 ͢QhqzkKGQ&oK}[c-=~P:͏t3n[Hك]"w=d88TjB KrFg#ʒ| փT@0b;ޱ%w'ϵ rp‹gcvr$p6hR=8r%emY-EgdQ9^X8%y{+@xerZX'Yw#IUN=q+Sc@C-]-7ʤX"1 rA@( BP7;YK? 
o!p7f]nNԌ 6U][lI J'H mU{ gv 1Ap'6e 'Ka$m`yxYoRo?Xz\j URTG0\-`=[s X="hTBcb$GbHQqH{3seN{*,!%ǫS.5%pG:6sJHα`byK;a2xx32С-wFG% ITzD$ls r%'d~"N+Ra2^ S{@|xf/~ x쥾d!0V5NQ+rUZ._mLZF׌WT9hνӼuoּ]Ƌg+¹%1G>|ͭ=(bVx2m*4=sbHrVʑP_=Ѻa`0ʯ,cA453Xgb'1Gu/vT׏xu{W0֭uR h!#u`hj'Ưҫ]#{&%NݙtS<묳pv_xKp/} ^u30T렮%ߙD43m -Vo0m|o2.kƽ,>ޯ1hi6ޏ'Yf^kS7V&[ŃtU^^0ܣd JFd$ybZ(FAW *fV"'2Z:Ã{om:lĠ02iI Km`fwAR R%˃ {6vZؙx{N悖[oj.;_FNK/S׫GaBaj?iQ5/^kuݧz6hB,}jP ݏB?4Y%vE֍ӵ*6)ٝ߬teEtf;vٓ=D&zRKIkt6Rrn7 !(L5yI lUN3(-k)LPl; R[l֓.6jeq}E>7lV>r֟Mǃ4\qVvMMc=v }oѹ{#*k Տ BH"2ʝrQ$, jLmʐ(U7%9!b)V2k1{VʴIVRYf> ^zVhZBլH9)(~U٬1&G|khjo2nm> a ΫWu do1,. U1J)A84N1( FGΖb'\{B 4FÚDvgl sZ9+ o˾W;#4ݯ?S-chOyfgͿϟ&ϟ+Ih4>V{'৚'dLw#wa@ӛF}sx䝨tr5l\4:J/Rb,lYΑ43fѳo_hiܾ&^"f]%Mmg|fo!|okx%hav뽝]j~lfԤ X PY&~fz2Nf^}ӛ-i'#H°h%vwz̍xIj5$5v'׳+s!4B 1A}d.hoEzap: lQܢBuB2@)Beb-LP˅ mm6 C"b(Θhnw+4gm1ig{\QMՔ/7ﺽ$Y+"%m2JDhCۮz&;LMfu[7_[gs8͆$xl6r>\$ԼyM丹W7w>7Vnx4CW|Q%s9|C9=蠺M o}MT5hz5ypêCz>tptʹ}Q9˭s%>+Vmѧi^:ݛooG [ԃŔg_s# 7RJ-u͖g[L}τ+(R61IBq- $!EޛbcP`l*1jE]+Sw"`z]E=sC8?8mmrQܲB>X~3{x<]^k?>Լ1uo+>erpP.9e%ې2HmTRj^iP,IZ>TxZ*^;`ASa >Dn&E+L:l Zh#v c[ہzW^&ɧ7K][b?}=E]9a ,D=()J >x$$b(C2jpQ Mvg4~y4IHVgdu^:`e Sdcǹ(Z6H"@4 @1CV!VpvSGF'/1D6Ph%Z%Ϫ@)- hR/JD^, :IV*ojFfo8TehQm f!>7'%} -eG3\:4~?k{KtiG`NRv>'/y}~UAգ>\st:=;S+na˨l]?Y|l82&lY{udu`[9rK 0>U#CZECbZ1H5b6A}툦UTUnݗqN*NH M3T5Pu{ w_alhb]ʴ" }Ν4Ku}2Im42w>KY;yDұ` g@<ݾ'$[&r4T& 7k$UX =9|Hx'D?u4.ˢ&~SCS( b4 &ͯo7gԘh Ii^4Ӯ44}4-nf9L G 7ɒF IP%o&: (pD~ToO܇tG|OEu9,81R7fQĞR*\!R/qޠ]kU ~ϭ(Bhe"@Qν:2]jf/tH'6駷~}a3-i$q<'YqNqj s[&eL=cgQ6+=b^,pZ^hAvޚhΚ| }_1[[ۇu 68E6pm6жzvq͆+Héᵾf-#TEA66_m)%A"0ffY6(-1fP^Yѥ[kʧ{WZ|CDWN_}n-jVjRV;C[f ᄅCWP ўơf(+tu>t%DW BB+DI QV<{:V 3==ufhOu[v#kWxt,t0 U` R ]!ZsFD){:GTf;*cb#* %XB()>ڠBJVQhSkW:^5GU\7Ar"d)H# nΩdk%ڛ !d `Y#\F#BvF4Y dЕ)z; WS6ɉ7e'rkw,m@W}EO)!DWJ ]!\MC+Dk;OW\& BZCWרP ZNWўΐ8Z N++y(th:]!J+z:Cj+\͠uBTBWst( J2AmHA4  fr"NW{WHW ~B,.%,-#]+D{JS*H A15 l`aD>РYڣil¡iku(4 h-YҴPS{dr5#MlUGtιf it0w#6SQ4ꫯېl`l*p1*@iz ?%ez3ʂ+˂TDqCW`}=# 4'lؐS.?mC4k QV.]1Rʲ=]#z*6 ` !!95D)hOWgHWLp.E@tũ/hiWP ѪԢ3+.$g< BRCWWP uBtut%6m؄CWתP ZbNW{WHWRX@t5%el#ZvD)IOWgHWJ *I@t et( JS%+Jw):2aCi TR&jL5m3ݽvI_$L;MhIjԹ6x pLa@k)(Ms45 KaU`%h_-,YnYpm]ݣihNmR[q7u2ZW߃jv(+XIR6T$q2'ux5ִٿ[?̧G*OnWwndKԐ?o͛Y:?Nߡ߬rQU|j|o'P :V\GB~:,fϯW_}WTC`j(//V9zǝJNW$mT4iNJ-06$_ 5Pɦn\IR-|tàrL!щNZL94DϹ7śfRH4M]0֞*ʥKn n6\-JxCZ3ݲ/?Lڸ}џ;4Gt|soWWz7_F9 _lvt0?kX ;GT_GI1U^>\N~^Uk䦂iqJ LU"晃L8g`("r ٱ~6r׹4m+#g9)LFZ6# 'dTXS$Y95[HgSia:`;g :aJt-ekm 9br6J%gy585Y1Y^?X3\"Ngg6MNO"'3&Υdߚ<Ԧyų~9>vH99@WD]O&f(aR%̂4Oѹ*K+w e52*jJ.ѿIݏbMjـ*3L EY*loVu&۝+L,Q"sT 鳄%)OW:0}A:Rxi}O/ofFq@o,z4qrϧ3*f+sPU.;*;׳_V&:(7k(eSXKWFGkP-u7 XS7LF+"qQ$^Xҳ+*DZk4{. 
5 i:.w[K)T qMf>0Z)@O_`j/(he }逵F4+׿Es4@UH~楲@ I6-׭O0%LeO,rntOze*cߦ@=%߬Y],g$z7_Io_z2Y?Wv&U.|ʋcՕ\YܗM?ޟ~ &X=dߢ3(c'\?}<_]d^h /r> HCg,)L@&̆' iC1isʿ(|{*p27Lxbn Oyr>y3|ѳP}enD_%n,*kD؊&M)5[k 6TJ4BڦSz` 0vyx\\R˕ζRCE*S,DDsԿC4`<7l8㹻Qh'q4OK̳I_tȴ}tdMԙmyIkp>"̓sXܮYɥ\??1I%t^jN5]`O"14^ l bK m2)#r N 0]dǷsƱWwl'[eEu{3ݟbhckbk|O6j&y!ƜJQd8H݂<ԉmBhܰ"A:Ǯ|GA h7l8; Z|\v}~gޱ)F?^{#+'?IS:}R%V%:a Ώ: Oy~9Kݝa2PC5Qيl=TZDzlev]8n:l1;s"41Mr*;HFt1jt3+#sg7}RiUzvz$QȾu~;a_̼0B s!$L daRbssA6WKmx[yӛGo>&+sx.mINbOt/j"k%.b\ J, WJ&)J)X"V8 UW{c 5yQ!Lf jل*nMoM>ZrqQWM/zHN\Y֛͛۟sٻGl rw }ӑ\nj֦7=pέDJ.~^n j ]>G$E֦˜٧L3z!H+ B>뗦˖VЧ鰾#> l'&,&MJ!.fLk`zKz 8+KIqcyS!dqXE}5뱹J ޥDѵ:g⬍ B"礟 $O\mf?>:/H궭'3{)g ||.f6}o3Gin3h/دC?KM0"ױImRmRqfmH7ztu3/}*/`vvbqʶ'ۙ1$;@Z|o^T8 3QqIk 쏧P`U8LsѱD6Hܚ:t[C3&K-k00A%R Ŗ @jn,?@5ʭj_k \SK:H 2Up֑ !\R'h ن}*9Q9c{csi/]ګA[}5Šy9e~ºz)uUrJzVK 8['B;4sMt}ػu={0\%gД!Co-E"ZŌmEH M":%3'(5f]679DÆD`#?d'0|6KI'WZ%Rl`Xr@$N¤>ƮhJWWA="߽UPldu;"} :l8;+,U'yϾ0;RbPZ[U=bL]M^2y^)=r΄Oצm_KաQ5p W#Кq2y=y> zl(8Q` YSy2Vkc622,hB|-1,Z\cnFGay¾02,-c7cacNc͈]|xCW˃rW>_i7 S^,N?/xB( {ZeM&H0]RX<0IVEsƍ؎\C 16]% 6vR԰B+L4<Q̽AcۨmFmo=#؇,>dQr) B5u%UJxX-F+19PsOc}!fu5oQ"CʢomPŔTGQB|5EI<%ΰ<W}ǶGD#dnN BqQ s+w/| ƾʥwhDLd Φu5"gr! F'T2irX8l8O=}Ԭqq9]9>um`_8Ň,ђov chE@<! Z䰲a]+s\E\ܗx8:~p<2qx!Fs-./j|t5Ul.4˦'ُ/h}ُDbճ)yzz.A m ) dŕ |;c>;=w4{A Dia5p>HܐLL91C. KR(k-eHRߊV,WK*$f:Wӛ<zpWqyo~]mp~zZ˥rg#rr40^6TY13^}! Z(i3llo J6q|>#ZC+||:\/_w$ !7,q\!' #H&' rsn4&7|?hMW+[C^ײ/h;x4e15A X0ɑK>s4rc] STG䕛[9ӣfyCv0/Dazzڍ}kћRކqb#s?eKj/6{6*(u?A[UݏBu5b8zBcH_ /c"ꢤ/a-) N ]*䍝v?GpA8=30YXNE66T-ʯ`Qa6JET51x^I֊turRY<\|SzzY8FJaOʶzѶK"j_gw/B+J˜,K:[U Y] +Y,a"0-Z1nYr,֕\Uꪺ!բӴ9 0|~M\| d?=k<~1[YX\|~x?I??<~:z&Gz ?fpx+!!| a;V?_EѲhESdߤ\dW{}hBZL J?g}9?3zS6fy/ 3?0(i, T?^E(ܙ/(D q7A`]MڍdtYnܣ$z⊒b<HZyBJwB1AGR#LnBgqxpCyy1}G;u@ hEbNB< *E8a֬idcgfqИjҰCF7e{]mdqe_"X篿P9w~ YltJaڰ[>ܽrq줣ǒ( (}:Qj̣r]J"$ D&>@V[(H<+~xVNI^<{O~2ӏ~ gnwF0Oߧ-瓰\}߿a'ܵ?PJja^, yVwгRzAqKH(nU괥U6YG\`)(bEwa#+3?޼/WUJV|2{d߽M-,ey젗h.F9d+࿟]/9'ޔKkO/ r!rFACsKFD¿:]qu9XD ."(6"Dt !Sx̄A\c *04D&CD{LhAۤK)M1(9 Q Ƒuܮ'NFu: k_*G/B˙s\|-C`Sr8߿^Mɫ,`b RL2aj(5*`3-1JFgl7y_6rX#\b/eno;s#c3rn҄09R3@++"Z"K$m܏ܵ7'wj{~',+w Nhp>xB؁$d#sXN8=i(~DUL an0B*Caj=H6e3zl5ͭlxPG2ބ> Æg(Laޒu ousq) B& |l >멚$ޚ9.ReCsd:Os]ߚE2u ͳ]׼r^$=j3fRhM~Gudo2%Y~6zͩЬ_' ݸ:ܰHjtα)4N7oR7渇¬їl[>Ǚc hG˕׆)Wn,Qv>k[ |+] Lh oۖbB  \# #QZC1$F $ Fج{p?^,J@z~~wh q<ӭǧ`ىƤmqw4NS#:*$W+ F%QyKV<)^!>;bሁJk]"sbd 5(2\JZAr;!v  6.2IޝlB9; Ո*axD@:Z=H SF+rZCh7r,G~T73yy !PN: Cd ]|~2h0|Ƌog0=<3&P_i7K'u,WsM_}qyx6=\j7MfoƳ_I0×1 E: diOuO _Ypq$|Cz<%_8X*eGƥgKjHU` "^뚁X^cz:e,1 8dK79B/A?ߧo& 2,*J*QdqsgD=`Q( &yVm 9}DՋ5YfʞYMXJWkbV*!P ]ɥ]o$"3O"ZvxqtqIΣU>+pMh ~6p5/ou7|~l8gLkf b=CT#ȷ\f;\z'/lO3vQ}%fEmti;m }Mխi9L[t00sz4kFZ2d$MXn:]-S.;{[n{p}0d Z8JMqNx)Iۄ!R4NƂH%gjMsFtNDoDTCۂ mxijy,G# ,[DCtNIu:lu; qWAzw ҳ| =ls ʅ{U|[}EQJwۓn .Dѝo ,N:D4; ||9Ϳkc-XZJN:ztE$`![CW TmV}+@ T6\\BW {J(uGWODT-th"NWRK+m2`EcjOph ]%-NW P]=A0m]`cJpTމw+@)P']=EJ`XN_,)9|yqgx|^:Q(k؟V3HNX!fOۧϳhV&Кc09D& p_߽L(P{gAQQT ljގ8(Ʀx>= )3g6UA6O*T\<ۥ{A Ft4uUGy9s-c1c*؛H-D<@tʬ Sŕlz1t0!lz/nCVR7Jwy♵"Ppn9 F9a"&jNIqם;up<y,![nu '1beP;ɨ+JԚ=M %@ثD`[CW nQ2JS+ m0Jp9i ]% \2}vtv"v=!lnXǾn\tZHv($ֽ+]XkQ G߲.mV}+@ EɭN=Kw={/=jB;Y{ɱ6$Q('wJÃ&Ӓ"60&\֚m܄V}JM:Q vLIs\pRm1N|CtE7zCD) j+ ?>6 tZHgzCk݂hGWwzm+Dctpn ]ZN`]=zsVca4Yyg0H5&*Mc3s'Ugucm?`̽3ԿNj1yƒjBYpy)2J3$ꠧUTH30KaKxX!t6:>M F場RP->'F݀F8L_H˟͋?_I1f]c+-U^V- ?cr2Bȟϛ!XYUF3zielsfb;fL!f8c[u}v65e[rZNex*idn%qu4~_0a ͔Ch3%yM&VCK v{З ɍ*=Ї GM`l~վ|k>$04aX)Y=7krK`k "J?@`VYkth!Ѡ˰3ҫ 3:S"eſBi!)dS bEo1x-&{v:M>L4GbGj$(Q*TGg<* %~YZkCE h%}{2:8:2kX/H1&؉qok6{աR-`K ̆ 6alBwkYRpPT{B;}t[P;$LX,8+h&Z _W;d|J@z3|Wv&2M!-'P ]p,ʨLhD'0i"n4[SjUZ]A5  .0 ֎Iy@C\ZhPl-KBa'PbK@42*@A&:ĬA0ཀྵl %ZM= XBUGLygT),dD>@PS0y޻Pъc4 uj\ h=9<+:ʫ#lF7KJs;A(X([ e 
5P!omeF4$)C"OA{)/оŊ4dc6)9yh>q(hH;,'ka /s~ ;y{7xaioEnk.Źd[/NLzêʏg.0B̔ZXI| (#KyD n;@v+YT#$]+T<uq2I <)x,G/Z,,tE]@) >@ &=:kXeS@HˇCyW3`^RX"ɚl>A& q':Cp/} ن)Sm6dUW+tmHmzԥf>| GȋU}L!-.x W5MLpO5r1 j#`QUWۖc)p]}+ƈv( B2jC - {,hKWՌ\3VO0 @HDn@GPFx,CJ35ȍZ. py:Z'xw؛P:<~,KʰNJrâ*iB`v1Z7N4=>UG9D; j3kE k*iZnxfAV-0xx/|Vu31}t\B= 6b|}nzv|—bo^_]wętf`0u Ɠ/<4z1Q6EQ} w12,fWZ'LIٺ yhYDh4v/fcWrvQ07g3ۮ*3,.w% <%:aBrx%׎n?Ԡ F =|&:U%\+}C;80Z sA)a>zbY84+tOϷ7'84`[r|pp#4r-՛)6TyǴaxDsbb孤=`jKh!V..D?F%=}! r͛"o8 駛gϞYo2D3ycWnZ2^7?lM>cxCOx3nvs\Z/ov3uR~L8FG*(#xǰ$hf@$w> ILI6$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I Mn(mfv@A7Z> 5 tIFĚ$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I Mn(R([;dMiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&N' zJ pt=RU׏6F=wkW.o}G6.AY .DpM%!`ﶓ~3t%hs2 ҕ̆6DW~kѕuv+t%h%+qCtE6CWo׶+ANL[wfJR ] eT:EqKdfJЕ%{t%(}LU9[v_~wdZs=<ٮ+E ٳݟm7monc-v؟?^^~kh^`r~+33OuS0g9_ٗ\zz^\.%|uٟ@nw+.@=y {vʾ\~5'{Zo/ڗĕh6?~޺Y} $]~#Yoϫ~sل0=]|SZN ܭQʓ?0|'YB=٥JO\L~t:G,wsai.L^*׼L'G8c(6${#%c[@%G'({%練e n@q6 A UҖԕNٲ\VJ~[+Ay̹>sCt oLW> -󷡫 GftEJW_:nBW6ѱPJW'HW.h҆J ] @[+Ar+AylOS]mp2+V hNWVӡ+ 1ڸ!q;fPͨ+M͠tutxo6DWLf;t%p] ] ZNW2:~Ct%ذ۹w%p3~I:v;HWљh챦3VZsfSc\fVQn ^*fy漽FyA|ݳkOu11~F vƎayn OO!+UER")JYL5S]SU]U]Uk,I’?MrG ۇ2eo"a W~p%9P]u-lu轸cvbfˣa,}{/Nhu.R9KVʝ«ee%bˍˇަ$Z|4 7M"r EY\"7x?-qJvKrz?>IYCE5 AƨdcT.מ\|3|;:qgGnI (,@۶t뵛qP!,H+Rp%z* $7!̑V0*oAow8Q ք A2IViǬQFYJy>%&zqylDT^Uiھ8m7,ͮlA }co|u>.xPtF+Zi<42 Ӄfv¹%L.-V)YDӜtn4D >5]7|fN C t9).Qj.tXPFX+̔k# Xs"9 1j5<[1 D wy]Q8FL,vll[ѿ[OLV?8W."ɇl>2 0ƿ>4$ɾf0H1kyꔘ%CR誫i)].:mq(\@ZX.*Gע/`A`:MTFc]_g$r|.)IJOjx$gdzAqvC6MMc8n6 a8-$LOTI9L~VTΪ'3_K7ջet>;<[V QS"348<֖.9HFI>>;3vqa!a$\]qv$׏44 :sq1{59Ba"Q[Zd;5l0g7<ȁL ,kPL1@ 9⼎&"AZùN':E'w$2\aMΞ3Ks`l5olw9ݩ8szҡBTcH(;T"ӡׯ55~Vz^FfPdc^Vx}[gyVNODyYvq̚`/bDD/{ r%%%)g<=HfbߏebzgtoQĖ%`%Y]O` ˙fYM. c"OUp>jxZН49W׵98Pix`,ڤXxn23Ńl/a=7!ek<ʎ";FlV5|dq{ tH;l݊-%ro X:*`%m0ƪI[cR;ĬW逰Z0ZY@SJx-Bc=3tד. =Y'nj뵋dбܕWs\6lKESxܜ~<_?jfŊ;=vͥ j)r sl+E> c7 :Fe %u=vc޴rii(D(\!Q04\`D$-hA9zX)."(6":Q҈)W!OsQjC+1I:A=Wtg;S<,稸h皤޾ݾ%@Cϼn[+^WR0kZn. ̓E[HTJ5 *%w⻟SKڒf3lp[gҝO:3[=rjy^Kd/[nlsrobWh[/V>:tKh"ڰ! g``E-uϖt֜-l>*-zY~qjmZ݃y~ LhX[ 5 pqF, ZDi i,b5#z!MX'2""&ZH0<|1=gbj'rS'0@Yo>9<&dnsP]}moww; ۝e+96:ziGuTHWHS"F%QyK^<);H{k2>`x@. sbd 5(2\JZAr 9!B v$G{$ͮI?ɻJRJ=o(+MVt{H1"P=4<V吵1oDYxӍOgg z6Iǂ䞁:ZI ƎR+`bZ0# GR!At"yn CNp?s58Ep q£A` `YedSzl% "삦c# ;:q^/uU](I)E][v}{=MHv/u!ם:O. 
ZTbVbڪO2r#mZ-Cm3\&%RGcAaQa#I<LXYv1"LItՎVjhRB\eO:|( -b}EE_yէ~*oKɭ$5s;d܀#O?OsWof~@ό;O%I!^ZRc`-;?tw,:˱,%ᤪ|Rnx}MCimxk P)ZNZu|4 vF4K4vAdі]FrHA 2☗L>0mI^-rhPXi# [dF3nZ^nRj-kb(ŽI~|db-/y/^ {)kRz)K%//\Y|AvSxjWSY%irWߌ{$TXa:ݐ/>oPEe$Xs)Ƴȝ3ti)7Cx9 w17|-](9mryY2E΃Ny_D')7jC3CHga3H!yNq!{+Bnd3[@,Ez} cx]LM n"K݉2m9ĪZȘ*]Qz9u# WErņVpyӥD,2=8ڨiz)H8dI!Ϡޢo|ً`9@bi#@)FlHph75*& Nh=`]4]޸_c[};Aq"$E#uyH8˝cAxFîsrwh0щBPYL'a4T k "_.EqcA< èG9_Uhap`73Voov7yHW oQ1]z~xi̓L-j4#?hx_\QTyڶAr j/ӍN :9=E Ai!!=iA/oչ\x;)^egvTVL)1Kű_)&VOf-[.P?^`ßρmh1 ȲHvⷳf}|ŏw:+q {͇gZ Ԋ1KxWUjE@\>< ntc5L7H/guSziM'%WYx!<ۨk-dA2iɳ쟗= -tӏ43ݟ`o6;#.%kv>_+b1XHWSŵFj>zZ^=۷ZeRhEFWYd[Vv^QƠ=D਷Z?k2ff5a.Vx[@gPJ˾;mv΁V^ fNI@+ z(j>Z8e)+9EWFTA }9)+}3U OA׆7][4|K{aI,Hc8r; 1C-mslꥇ|{!`!>톦oQl@->포kio 2;*4{V=ۆ~nXISuq5jlzzI ,{nmpj %-`s==+DoM=3h:Y^rzSd;ւ!'Ous5)o)M lk?|?%˴rR/,^Y(xQDKr3,lI69#OcfUq]?[ x* })&tf{ UwO^}+rj|)'e>^gZ hF!^;q MDrhjLZl \4]=t1/٫=lcB"\/bS kgN,lH0|"C߄ WXn'[sU5KI`-êV]ꢄU .@Vxg ɏN1UP`A'JHijN [r9WF=<]xڠ)WVv& 0# 03Elp^8B3p)H!0s`z턙fZ}^FO@č&G>xkem t4(V%"I AA+c}v'Hm:[+"IjJ$Ž;bݡւn%卾{K("×ԕ*=}cW-lm! Bʟ(YmćkpW ,?<"w]l1b{2-KoW> տ;nmM߹Y5}!pm I{f\o&ZRߕ{0n)2\-l)?>fk1pG'l y4F?a%/ / / /Lb {) (E@΢.hme krgtJb6(!8NX׏JfY3Rُ[KήxnRMlJ..)$R!HO#aev+#IF=v)Ώ3e$ߟޥI_#!ז%}rdyК@MST'C6vŬ(u]03;f_S6 BO;xLmQhi`b(*Yk,bC+ i(N>m9)iI(Y$4 +rQW>&svLp-0dML0 @s,9*c9(e3Ofnbz˺@ ~I. P5䌝e˅'%' dҌY9)y;:蓱$H9drvPЬ>eo7lxXF4dw9%OC8o٫AږaʐoGrZ !IvتMa(HZŊbc3Ft_Ơ$pylg#Del]'R}epˣ/]Ӹ-[e:KU>IG"DOn$mayvO [P'p+c3[7 s_"&3 hrTGDq,`JI&5(ihR K|Bf@dMf:C"3D* +7<ͭAR>lRU,3V{ reVm s11B 1 4I8Q;缦p@&ѓt 8!=d3(gTdAȬmFs)Ġ!`Ƽ':5O\w b!HBE>'aEr88#)pA% VR"t 'y|8h䨃M@%H"ғ"wP xTHP o'^2&3[b2H)frdQ'Exc29&&2I0S;*7OHd`m;f0pG `Pp>ڶdvK jm[- A͢9ӴrV ֆ1# th{)slI4Ɇb&KC4> b!_sr_X-V .{ -Iٌ}D 6H9`ٽwpь\.ƴ* ^'_߿~斾2#;}l4ѯݑOz$fivϿ7FZç ~Y0'!C8V II,7LC4_gw, t9J ,&x~W͛+9)W#F*0r6ʑdQ$UVُv|cyti,kg |v TV9PinNv^!ހe9( .V/4'x1ϳɫٟ&(J2k#$u D4gйTY\7 4cC=`~T|29 rjt>ZR4hDvΑN98<!Q8RLXzNo!W :04 j`TųBoϳj if4|4\M Kc (Tc-+PYž(b1J4ibei)lia[QamLavlCc%ptӧ ӆ^4~k?Ӡ=lW3۬{ qd#%9$p[mh7WX&=(ieb72sl?*]<+VT_'ϴ^>#J I'^[9{l٣4M,֎ dt:;PMxȘy weqH02-\ '[ޘݵ^ϼxlҼ҄M*TU7EA(D"%UJ5|;@s B?8RNLvZ3|pXbsCyYιdv҄wm`b7r/k`W?,ZKʢczluۦ1=db>57͕9 zQZab9͑U3 en!]ם?zksd:ɑ[3s)\_eͣ&BʴXk9j Dmڜ濮H.' +Ni,-"e_. #)?suur<; ="y#sS*<eNP ]._$I\ ҋ6Ȓ^&(Kڔwf5HX8+@L@=֙:L(Aoz߭;mBo8e%Y=4 ˙rǓl~NLğ\imC@0=?> W=1W 4j$VT!\XUEIL:z D+!90nj'E}5pRkE ,0*SLiI#9̇hd~YM-oK3>5 f,<=]/3:Љڱ_DrR]}kZqhuVgpSrhV<;Og/y>2.2[|9*=ʥ]nh,_y|0 91}ygo4$RgvIZ&iѺ cug3bVMОx%{=rMIC^v)L)qZқޔVHiO\?|7r&PuH+̙t K$\+hqZ.`o N B (ȹ^.rvջ XH4o ¯ -P-a__b\Q%Fz cCI-JAbo! ho/<>m#7 R b ߷D=a Oh:Jh yD45W%:ai aLB%%b:ƃk7Q O\Ac>\bCE{.\?P"_$ kN>BvT8"t5ƒa0mPi=(2!RUVk48-y@Hff~{L-\_^|?iOoyva?~s\;=~QQ;5@wf&IrՏ\[e!i]0+F-ORi8QHEdv,_!e*Ad98 Lt1W( W4$C9\pǘ> *ڀZ DTpCIRw䚑PB(YitTo^t=l Dei:SD9Xd$dq\DRdVZ21[Y,a2LV!#yEc_VCgIf! 5(<3zGTe59Kxjج {RfZ,8sYx zlZ= OSO: 0HG:TR :O]yMUFRUăej~=#m]yW?Xp"}ۿZZauꗯh(f1%DR,G]*WQsN4j L'e*\rAe>^')%@hOił&{ qȿwυGQH7\lIC_DׂZKG0(ixe{": ^2*\JUPԭE_/k˒Eo_BDkDh,]FDf?Xvy}f|y:4xowЎ|A-{Oee>ؼd-9SBQkA(HdѤyE?8/!u 8xQ†*\D9AG)D0T$聦b $Q@uicuHbZ>lǂm ^Tij+)Q|Oɬ`R9.BP\`pFtU@0@2v-INc@Fjب[ewܲx>ehO+W]#Z>}oMߚno8%D#>=vdPj>зdi~gs0˫\~RoN+$%<|D SiSΩ~iNOS\4BB?fiL֦4 <ҴM}_|k7 RjuYP3|Cvw71I˶K(=)I]ba͜lղdpP4?dLFAîyNrH;ouo\1jlQ!CwB=t'M axe:}@Wii-?]&]'lla=}(QSi:Σː z´p9Z Vب h8!dNFP4U)׾|E; !%KJB!:( l%, v5+ KD.P^:Y>szs/ehH>WOcǞ<#TJ[x!T ֹ[PSQ{Lmh/E}/0ʹ`Vk$db1A))I1֧ؖJDHĠVQ0ʪ b4'r Yu9i|ݭ%W깰 86ed"=w]UAPM * !o;{;SBCọ R;ɂo.u9YIX ZԺ`ZyJ%joj]4Hlm8EmHֺśL.h:}JTV.KvӘL) P+}Pd=)JM)#Oke{z*z`Ҟ83H29k"!F袟|+_Oa l@PnXi\3u+$4]dYq Iw6`oDJ?D٫jKJ9ZtY:l=TIQ1{*(J:_Nf>MiOW6>u!;m57ATcBzjUJ@/$$<ynr.mF[= OG@,2e$tѕO $9)nAҭ4+񩙹! 
*"U|4KЅ l zG~5p{l_|f2Nedt$Š#o~]NH)3E:~I;g\hL4 &Xo9#]``ݴX@]짅-ǖ/Rlye>cC @Ƕ3Vŀ01:cP kB69Lɜ 2Xei+L[pFtW[ju,Lqm,2I[ 랬01N>S؁q[)C, ^8cM1 D< j FbfzeGr| v{457)kt_㒳hy-erЛ\R_P*<j-FFZvK2Qk lŀNVFXNɧ'@SR)T9ӔМ uFRFHN>t.':GBv z_a}n-.Vav\(k!0Q3ץXnߛZ+UrpeSy1}~c]sӿ;J6 !V*oܮY6ܼ 8{^XNˎi65-=/߇tjc9 CM~cc0>(MN:]lri=~Kpb~S2!kaV^z)y{$3@< ƕ؎ lȅR=cu 2[{vujS_94E'Ҿ9C'?Tk43Gv#|T ["clu37- t@ jqZJ Y3JO{x8q߁vnoJdl͎ˑIE'wTTp܎/,Y`9Pُꨫ Rk~?gy\ٳGj -JԎqR -Sj\9-?T'D8E:] ףJi-ɯWvR^Ys}G^j8ʗHoʍ@<Řk:d '%P L5or@8RjU #+~3~2u˓Ӓ(0"FLC k.%8A&C"s468^KJYw>"%1\ڬ( Ug2^-c(&E8 Y4Qo?xOA v 4 Im؉FnÞ-OOL XiN7 1ྣWxܰetIٵy&[޸PDZu'L7Vxr%p"I!6RkXt EWۗx~ =z{ vQC7@:k-om~f3&S*a0BѢ1,ٌQal:U}v+CI&cVudsQk]&b:JYdJN{Н1MEg9$:E254VTȖ-Aes g%o?˜ӆH=#[rUbPjF$$45Мeuzy7~.&Ia8576MDSjYB~[_fuGi ZS Z (xpn8yBZr0u6pC#Яvxݳ=&ܦ$;Zu{.UƌĘn[b+wE.;A ]".clǫw RߵnodO@| ~h\2>4=uΰ$}hz/ئMiJg୦4M! U;1I$N5$8HN952݌Ӱ[^C`a@e.@UIpaU=%]pʅ Hyx}U93޽cty8uՈ:αaU.16.j,`UGlj;cÜՐ( D)PZ+i&bL]ҊWxyh;nv# D>ڻم(V$}Rj/dHI8&¹p׼;61 2R6:: ,M.0%ɫ&Jߵ?>Mv'>; } \^ΰ39CׂC^ %as#SZbm#MX[)J܄p":Yk1PDNFf3\=I?m\mf%7)Pt( !`6NkWy|QݰTE̋j^Otr?e[LxuvU{å"IA:7Pne4BuA<ߐ:7__!bހߝ"8k D1{]7MbyR ^FhGMuObM.$)`A)% p. pt_xyr V1w+۸wfԦp0 hŔ^^xO87=/>'6,?1%%D2:McPqR8J%g%pVMk=UJNMCVAɋilUmvaXt9: N"EX\ˌ[K=~Hcp{!v@\B^FxqM7/U3GG7~WqݏvZ(f^!a)W"dK^w/ˢ!"`.SQtJi_wJdsm9V\<_[_\ҽْukcnONf'Sj8LeKqk ú {1h4?hCҩrS a?WOcU# EAcJR57U|ӼRc~nROuk(Sۊwrep?/WH+@QΨJiCtOD(NkeEOq,5_YBJٞv֔t@FTWJ!. o3'JݨKw Q 3(~; )K&Grq\"8> 6RR}T"$Ti ClHdNI\FGqo 4њ4<7q:c~9{<%rAVw_/ct'f4!MLɜ  mhvs5e9c5'M8a"Q>jg;1P͍&'~,/kp>?4_bGy|Jpʮ>/r?gz^ >y:Ao竓y(@ϥdUzD4Rz J˻"ss:8J9s֩\Ϭrty,wu\pJ+ iILpQ2B"^Jl4 ҁttP8vXkΰ`3y%E;BW;\{T k ?n!!{ 6H1&Th>}:vHnQ̈bƍK3"i 1˺ڑwxjz:K pr(=F+Ϝ#R^֎ҁt``3U>DLDNJuRJg&{/9F) IZ{gPh~x=Dh cxngKƲX14n4444e,o)TY݄_(_(_(_~YҀdq9JVddَZL#+9(\$T"afdE3g 7- L KsO1xਗp ˥R1JW4g&7P:PP:xE>XZ.SJ_K\!dӦ,W\Q|֭(rQBĤNGe@4:`I7;BQ޴D5#8'K*kN6?{;~%K:/ms$6g,n\s`8_:b E&ً&. d^0TTj@e)k1`hѝ韣GYL؝YW 'v֯]F/PGW?|Ÿv|lDb|?R!?h2im%#zÉbBt!NĨ&VQ$bQhUD9o'"ZѪVE*:5 PF90.<_s "L 3&PBgy@rÃp- =9^Ve,h(Vjox;LZ3,Du!kIH%jn/gy4=Y$vZ,~*זj܇J]cm5s3ש4Au"G *d<%ZRE aF*u`bnyI6pCBYu-II:Է Hzl~{jLb&8‹_!_9kwosoo雭 LSt+6>7a':w[FnnݨT MQOZ#oE򏘽gccGޕܿ`fBq\'g}dyR$e?7|=8g)sdA{e]^-Wл.Rz/mj ɚBo^/YOYmkw]#TѵRY), OIi <7˴Sf._yǿs+WQ ^[Uc*s+VQ9)[+ElUf#ssUWe^T%4TF#Yp$CpZ;JGDۧ5O':c9H1. 6 _#CQ!ohG` .`܂wR*l,7Rip bWRbۊ:Ӕt -6f,%])9Y_ɴ/RxW K+kWvp߫¥:skZީ{/T!ꛗbsqƵۻ  }7;cjexPb m"P)B MdHY 3-]Gᘌ2c0j5L8D2S&uHmv RhΈX,-Noa50sP@T)IQ׳@)XZiȥA$]4A#E<'9s&K1Nx-zxF{q3XB1Nf\Q#xFG\޶PZbYClP$&;C;[ELmVQ@j-q1idf)[!'h+$q2"] Y)k)z:)5XXXI՜Z`NxBDKQ!3Tc!i#lHNIt5H)Qqfar<0jP.ǻFCW6zA,Hu#r o h H@ 2iGu?-._jJ[x$c΋LapHSg_jib/9X#L6ŗ"ݱ|%Դ;&Zvߺso޽U_.,<.7bM"8Jg-!=lmox0=X݃3%je ̗R2GUJRsULiSZKWnW 1\˒^eUY [5=4Ft#8y2{iJKM~&{i(;5%OxkhgWTfQt2UWGJRFJ'D(Ks"TRla.0v1DYܿRN0UE)P*Dt{M=PJ'نR`K"U61Ů4T,K/gZW3wC̙]ZqP зTlJ/Tϔ:7:BI1Ut;f匋.t|>-t"5ɝG>e(_~훣ס_1R˃a7#m3/^1t]~7'I^}|ja޵q,׿BKӏj\Eۈ!SMQI=^;+΃3e$[UNUWWçI8~lyjx(_~|wC9?yƒ ?=gx45">/o^!ۏ[AߓVU۝Zf*E+^={\FsO}Ak"_(o>1mXN/D37s?S7CY`"Ey1kn*Ֆ ^p&E5C m  ~gnt<էݯ;CK7缾<9H"^]5xUr副_8:+o^/e$oZw4`݃֍[f Dͱp^} q'3Հ퍁2G&N"+l+ܧs7ґQć6b=`~ܔM7 4l\poxml'#J!i#Tdٔ(Ug3Ԯ5*d坛3_Lfi3߿wfa3̿K`MTsn6e ]G .zƉ 13 I1}i7khf˝cDV LV.%KܚV=+5@q`1*ԕeV+IRkohtۚwl}vҙSǡp5j ߾?CPk8ְP1fcgYoJ"+FR|Cu]WV?~a?e:Cm=Jznf7?85l>ţ'WEĘ5BoZhDI>#sxxj*?GakM 8.{]xZ×MOZ5=> KXې">%G ]UrFE5%҂ S7iC2ţz+q)WF6.__65mrm=.oq$'yOG<ɚe?x6Pz~R*IXJ,2xΖrA`}Y@擁F)%~[7eQ֚2v{v] !)k*aXPjF \tAVì ۫2evI9,Ǎ鹥Azhx6V6b;aONH [1$%ȊysSuɆ٭ܕP^?\.!9+BV2<% Qț S..C5b}I`y(*Ot/2$C("s:ԥv?1Ǥ)%W3+&0ICKhiH]GTMJ`pzw4,֖4WBM 5+:D^DcI9jev. 
nwu)\*aq\.Nr=ƀ`vofK>*e+!.H@\BL,p 8K8Q5ΪDA{"!+Br1%Á4R:Km,_ϫt9[k^?R/VE)&~z#he%8J-%L'juII N=xK,A0bzE\ۓy`ȥAkt[錨IG: c|֎,JXك#`pxM"4 *l9gi .H[z lTjD|b-o7x) Nx G%^:p" ߠs5¯UGi6B(DL{X0 骤0KNE򙧧y62l*S^[EvE\\j"+ou4 QѾjU0,ZT[HccH -(*x5azps ́rW/޽z\KN&e:"#!U(W&-Cʒ`ݦYD:`jt * p p֜ᐵY v/ ]bK~;pe`x#u̫IAށA-؈E[aP<|4p>|g~`Hem~wAI|}}fr%XVujI=K=<avSme7{^}t=^ST6C+6xswzw >ղ63`L߮5;O l X W>1G.mVMr2i3Z~s|l5k'dlCTnmjq:$$عVi6VA͍Zt,`~IT<۾g:a4R*T< 3|~DB. %CǙf MXώH,\ELxU<рڞG*hKx b8].M^:lj۷5F65jsgus#$(Z#+fbDAH82L y۞gd%tT8ɄV 8F{i6r!1IT㭲k2 S#bQD3ߋ:+hD^H~4u9ik&LB2$$+s *W@_đDZOY#54H'4O7x=~IcmQE'F뤴=4Y#i#$Zh24/oB A%6y5Fnww"phSZ8A@jm^LjUm7!YB60-+:Q!qskZH`a(i$$@ i$&nX靁I*H$$$#&s^V{&j-޿cI{,k<#w09p 6klr<1rɓɵ95pBY+iTǧo߾AOqhլ|G̮cUbG2*!ThYBT#@w <9R:V9ds(ؤ%LҸݱ!a wߞ 2!xIw`Ye8bKp=צb$J),2^3Drgwl e 2=嶋% 'ըyPbЃ.Q.!hlr|B)zjObѷXM>~d[1I3b!8E!w 8EƥAK  k 6ETւxtelB,i!XvsvrykkK/mgcx3L#"arsQIqvlr`+) J wxeTb` Yͧv}ܪDkn+{P"YkZ}n6hJBV %O Ys d NO@6B6Τ(s"SScC΄YQʬж4I3jk@ͦO.%FM[uݫmm2㎬h6]IYĠ TEȰ jCer>ɖ(i#ؐjMFme,BqBf'K%wo *^iZNo_pk0i`M)a$Z0y<4Maøލ<6[1:F(% |4rzka%"G? hgMlhWU;;EvHڃHdv7D }Y#B 7Fd0"006c\GV7q- )߽a+Ļ m}=?шm30tD6"%}l#$PL}.FJCy vK^vRhOUu0+& Vv|)4N%@)mWB;uӽr;HH8a|Mi8ĩG uI1=e7'9X;P"5 UZk=jA88^8tfGiz؃> }`,d.ul6jɍuR!QChToBфLv5bY dv(VnTeaCI07VKRR,aYJ(qvH9hE):J;v)1v`pX[-wsP!(0'p!4cO#a ( 2*1T L,]d[vI1/]8*LΠ=^=8bP\H %\\<+ ɵN9fP ((7 (dզ[h*_9x;wF?HHJT:y⸥~w8n63chpF);;vIZRz*y5$1PaMr+ Q@=VQ@e!S{ n޶B>n!dզq;aAQk0@9V))/ -#D`L 6MvL:-F*\9HB'+0$PqhǑwh& YpHv8TQ?<#=ZR.U;dIǫ5hMёCaoed,:!hF;T=161~(hE y(8IȜ *xl.8T}*=wlOcI_޴oCa~BacDF^#&W[ 껓t<`` \ }Mm=yfdw-[0;XbX<[YO~;z^/_LoY侮oG/&y&봜| 0zJ8/:g"xݸN[d-p{5Lf4糽^;KHvzx'H9x X%V/wS35 j`㕏GU'' tQ?^y۳G_CpTO9KKi`ҧ.bAT16?S@)k!f4W mIh%oNb4^tAgY5$sqv\\^f߯-oGu>{wQXg߫yxgʨsP\9%ޚϿ\E_ڧqj6hUv YH }7kB?JC~z;]%iԆWVpJ:F׈6fOy6au|cޝLȟa%drr~,Y@ 1-0t?|jNܘ RW;2?AYctt-^LyP.5fwIu^u{[w,bzXҫ oPUx-IQ&09EK T"MfS]9HUV=WfT&YBZ-ŕ*X\I)R&1r'\9q>hυ/8#Dz]!ri=IVE0E-xR%_UM.ʪa! v_{tìs#k{^ޕ)Z8p٭u##_;qkzVh-宔ߞ8T=\QKgıt7CnݑoWƃ0?s}nϋ"CE mT~^wQd.NKkFQ<0+C,m†yɢ^~ZO^׭M[W.IJ\P ,vVhH_`7)(Ŀ €I L("kx>줳{4]wIkS֍}B]YƜ2uSOJ=E A{39bi8 @dYYjmE߮_ a?iބض}dw9E./wcPﳸ$_>igD;,04[+RA+*2 48ؑŠiŠਟz1hAg1tCIΒp՟Jެ=:5cz,<圈${:D Z$TQr,YstqP T9u_s9ú|֐r҉]el|S|G!P"m&cC# ,YО}ڔ#Q1!?p{3mwԢV2M WAݣ!1{þő%SchaC5MxZgqEk1Uc{9=L" c V'>WȎ_=/[o\4z'>W*79Mcw7y'>WqKsy[o\>zGvشs#=sǒ[%(~zt80[y#.T,c PɽM & SsMBcǍ_xm[8i4lNm2IEa T}8Y;d^zT~f_<ǹ1ɧjil\r``1'`0^$^DOc.{F5z%] "sY r{?j\5(|`m YsWn9/.Qop/?;i Foy&agdѢ$i q;E#{DpO4$ m<$΀?}̀UѯK8 FSdPuAw1$\->t"W=MǎsL=@j};ΛӎF3xcܗ,$1,Ԃn5n%gmn'`y̹QL< 8RԐ޽_6&8j@f4s)F{V ]s~/bg޽o>6 k $? *@]Mu>wl&K<_4'9-^Bi֧ ey$|Ǣv!֤q9nlWVjc~L^5!X fZO;GwFij_9& 2]\5>lBBvqҶ KX `߂ebzk X n~߹fjs&`j+1#Z/y,@lAᒣkΐGc_ȷ:XLQC,Ů2bbJ*B 錘ky!z#*iIN۹?f#1k2#I,O LΙԠxE{a@1g@ at^q{j޸szZFҥAV@uax*!$MӴd)n\'.0Fs!. ̓*gaX# ,aON~*%  5{JX'+STsg xТ[K }d,XV*+ ]8&{$}*Wtri@oyg^ۉB O,Fz+W*^S2亵{cYiD2L:M}0|Y›tA`!&btE7@-(D|4 IsLw#Y_)v\#-G)nSzB7NW$-ZTL+|tRnl(Y0u)$60DԱf!>tڞǜR߸ 8$>Lkq,?3XZ:8̈c鴖:#.%1Hߣ,B A$Wk(!xK|~T+͚p0IN7VCLƹIh %&E [@Ͳ+$t^ޚDL }>ڕ.$RV(&Se(#(]dwQAeofE #B"q9QjҝZݽS &ĚG 4j2)OANxQTtW6AX(YJ:¥ퟍr) 7g",'Qq6Š' lmV'nYRR'm!a2~XP_SlH@$3$O הU#a׬r}. 
var/home/core/zuul-output/logs/kubelet.log0000644000000000000000003631454715136062413017683 0ustar rootroot
Jan 27 06:46:17 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 27 06:46:17 crc restorecon[4675]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin
to system_u:object_r:container_file_t:s0:c225,c458 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c24 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 06:46:17 crc restorecon[4675]: 
/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c138,c778 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c272,c818 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 27 06:46:17 crc restorecon[4675]: 
/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 
27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 06:46:17 crc restorecon[4675]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 27 06:46:17 crc 
restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 
Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:46:17 crc 
restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 27 06:46:17 
crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 06:46:17 crc restorecon[4675]: 
/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:17 crc restorecon[4675]: 
/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 06:46:17 crc restorecon[4675]: 
/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 06:46:17 crc restorecon[4675]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 06:46:17 crc 
restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 
06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 06:46:17 crc 
restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 
27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:46:17 crc restorecon[4675]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:46:17 crc restorecon[4675]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:46:17 crc restorecon[4675]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c466,c972 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 06:46:17 crc restorecon[4675]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:46:17 crc restorecon[4675]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 06:46:17 crc restorecon[4675]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:17 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 
Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc 
restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc 
restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 
06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc 
restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:46:18 crc restorecon[4675]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 06:46:18 crc restorecon[4675]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 27 06:46:18 crc restorecon[4675]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 27 06:46:20 crc kubenswrapper[4796]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 06:46:20 crc kubenswrapper[4796]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 27 06:46:20 crc kubenswrapper[4796]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 06:46:20 crc kubenswrapper[4796]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 27 06:46:20 crc kubenswrapper[4796]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 27 06:46:20 crc kubenswrapper[4796]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.269524 4796 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275138 4796 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275173 4796 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275183 4796 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275193 4796 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275213 4796 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275224 4796 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275234 4796 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275244 4796 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275253 4796 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275262 4796 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275272 4796 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275280 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275289 4796 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275298 4796 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275306 4796 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275314 4796 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275325 4796 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
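The deprecation warnings above all point at the kubelet's --config file, which the FLAG dump further down identifies as /etc/kubernetes/kubelet.conf. As a rough sketch only (an assumption, not this node's actual file), the deprecated flags could be carried in that KubeletConfiguration roughly as follows, with the values copied from the FLAG dump in this log:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# replaces --container-runtime-endpoint (value taken from the FLAG dump below)
containerRuntimeEndpoint: /var/run/crio/crio.sock
# replaces --volume-plugin-dir
volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
# replaces --register-with-taints=node-role.kubernetes.io/master=:NoSchedule
registerWithTaints:
- key: node-role.kubernetes.io/master
  effect: NoSchedule
# replaces --system-reserved
systemReserved:
  cpu: 200m
  ephemeral-storage: 350Mi
  memory: 350Mi

The remaining deprecated flags have no direct config-file equivalent here: per the warnings themselves, --minimum-container-ttl-duration gives way to the eviction settings and --pod-infra-container-image to sandbox-image information from the CRI.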
Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275335 4796 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275345 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275353 4796 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275362 4796 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275371 4796 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275379 4796 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275389 4796 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275397 4796 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275405 4796 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275413 4796 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275421 4796 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275428 4796 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275436 4796 feature_gate.go:330] unrecognized feature gate: Example Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275444 4796 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275452 4796 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275459 4796 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275467 4796 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275475 4796 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275484 4796 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275494 4796 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275503 4796 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275511 4796 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275519 4796 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275527 4796 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275541 4796 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275549 4796 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 06:46:20 crc 
kubenswrapper[4796]: W0127 06:46:20.275580 4796 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275588 4796 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275596 4796 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275604 4796 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275612 4796 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275619 4796 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275627 4796 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275634 4796 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275643 4796 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275650 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275658 4796 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275665 4796 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275673 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275681 4796 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275692 4796 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275699 4796 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275707 4796 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275715 4796 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275723 4796 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275731 4796 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275738 4796 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275746 4796 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275755 4796 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275763 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275770 4796 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275778 4796 feature_gate.go:330] unrecognized 
feature gate: MultiArchInstallAWS Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275785 4796 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.275793 4796 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.275939 4796 flags.go:64] FLAG: --address="0.0.0.0" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.275956 4796 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.275973 4796 flags.go:64] FLAG: --anonymous-auth="true" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.275984 4796 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.275996 4796 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276005 4796 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276017 4796 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276029 4796 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276038 4796 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276048 4796 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276057 4796 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276067 4796 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276076 4796 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276085 4796 flags.go:64] FLAG: --cgroup-root="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276094 4796 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276103 4796 flags.go:64] FLAG: --client-ca-file="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276111 4796 flags.go:64] FLAG: --cloud-config="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276121 4796 flags.go:64] FLAG: --cloud-provider="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276129 4796 flags.go:64] FLAG: --cluster-dns="[]" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276139 4796 flags.go:64] FLAG: --cluster-domain="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276148 4796 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276157 4796 flags.go:64] FLAG: --config-dir="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276166 4796 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276176 4796 flags.go:64] FLAG: --container-log-max-files="5" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276187 4796 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276196 4796 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276205 4796 flags.go:64] FLAG: 
--containerd="/run/containerd/containerd.sock" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276215 4796 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276224 4796 flags.go:64] FLAG: --contention-profiling="false" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276233 4796 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276242 4796 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276251 4796 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276260 4796 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276271 4796 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276280 4796 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276289 4796 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276298 4796 flags.go:64] FLAG: --enable-load-reader="false" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276307 4796 flags.go:64] FLAG: --enable-server="true" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276318 4796 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276329 4796 flags.go:64] FLAG: --event-burst="100" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276339 4796 flags.go:64] FLAG: --event-qps="50" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276348 4796 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276357 4796 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276366 4796 flags.go:64] FLAG: --eviction-hard="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276376 4796 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276385 4796 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276394 4796 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276404 4796 flags.go:64] FLAG: --eviction-soft="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276413 4796 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276422 4796 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276431 4796 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276440 4796 flags.go:64] FLAG: --experimental-mounter-path="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276449 4796 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276459 4796 flags.go:64] FLAG: --fail-swap-on="true" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276467 4796 flags.go:64] FLAG: --feature-gates="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276478 4796 flags.go:64] FLAG: --file-check-frequency="20s" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276487 4796 flags.go:64] FLAG: 
--global-housekeeping-interval="1m0s" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276497 4796 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276506 4796 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276516 4796 flags.go:64] FLAG: --healthz-port="10248" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276525 4796 flags.go:64] FLAG: --help="false" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276540 4796 flags.go:64] FLAG: --hostname-override="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276549 4796 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276581 4796 flags.go:64] FLAG: --http-check-frequency="20s" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276591 4796 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276600 4796 flags.go:64] FLAG: --image-credential-provider-config="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276608 4796 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276617 4796 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276626 4796 flags.go:64] FLAG: --image-service-endpoint="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276635 4796 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276643 4796 flags.go:64] FLAG: --kube-api-burst="100" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276652 4796 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276662 4796 flags.go:64] FLAG: --kube-api-qps="50" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276670 4796 flags.go:64] FLAG: --kube-reserved="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276681 4796 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276689 4796 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276699 4796 flags.go:64] FLAG: --kubelet-cgroups="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276708 4796 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276716 4796 flags.go:64] FLAG: --lock-file="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276725 4796 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276735 4796 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276745 4796 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276758 4796 flags.go:64] FLAG: --log-json-split-stream="false" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276767 4796 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276776 4796 flags.go:64] FLAG: --log-text-split-stream="false" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276786 4796 flags.go:64] FLAG: --logging-format="text" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276795 4796 flags.go:64] FLAG: 
--machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276804 4796 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276813 4796 flags.go:64] FLAG: --manifest-url="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276822 4796 flags.go:64] FLAG: --manifest-url-header="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276834 4796 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276843 4796 flags.go:64] FLAG: --max-open-files="1000000" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276855 4796 flags.go:64] FLAG: --max-pods="110" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276864 4796 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276926 4796 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276937 4796 flags.go:64] FLAG: --memory-manager-policy="None" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276945 4796 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276955 4796 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276964 4796 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276973 4796 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.276993 4796 flags.go:64] FLAG: --node-status-max-images="50" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277002 4796 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277011 4796 flags.go:64] FLAG: --oom-score-adj="-999" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277021 4796 flags.go:64] FLAG: --pod-cidr="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277030 4796 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277043 4796 flags.go:64] FLAG: --pod-manifest-path="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277052 4796 flags.go:64] FLAG: --pod-max-pids="-1" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277061 4796 flags.go:64] FLAG: --pods-per-core="0" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277070 4796 flags.go:64] FLAG: --port="10250" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277079 4796 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277089 4796 flags.go:64] FLAG: --provider-id="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277098 4796 flags.go:64] FLAG: --qos-reserved="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277107 4796 flags.go:64] FLAG: --read-only-port="10255" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277160 4796 flags.go:64] FLAG: --register-node="true" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277175 4796 flags.go:64] FLAG: --register-schedulable="true" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277201 4796 flags.go:64] FLAG: 
--register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277216 4796 flags.go:64] FLAG: --registry-burst="10" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277225 4796 flags.go:64] FLAG: --registry-qps="5" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277235 4796 flags.go:64] FLAG: --reserved-cpus="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277243 4796 flags.go:64] FLAG: --reserved-memory="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277254 4796 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277264 4796 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277274 4796 flags.go:64] FLAG: --rotate-certificates="false" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277283 4796 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277292 4796 flags.go:64] FLAG: --runonce="false" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277301 4796 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277310 4796 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277319 4796 flags.go:64] FLAG: --seccomp-default="false" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277329 4796 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277337 4796 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277346 4796 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277356 4796 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277364 4796 flags.go:64] FLAG: --storage-driver-password="root" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277373 4796 flags.go:64] FLAG: --storage-driver-secure="false" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277382 4796 flags.go:64] FLAG: --storage-driver-table="stats" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277391 4796 flags.go:64] FLAG: --storage-driver-user="root" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277400 4796 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277409 4796 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277418 4796 flags.go:64] FLAG: --system-cgroups="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277427 4796 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277441 4796 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277449 4796 flags.go:64] FLAG: --tls-cert-file="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277459 4796 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277469 4796 flags.go:64] FLAG: --tls-min-version="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277478 4796 flags.go:64] FLAG: --tls-private-key-file="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277487 4796 flags.go:64] FLAG: 
--topology-manager-policy="none" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277497 4796 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277507 4796 flags.go:64] FLAG: --topology-manager-scope="container" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277516 4796 flags.go:64] FLAG: --v="2" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277527 4796 flags.go:64] FLAG: --version="false" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277539 4796 flags.go:64] FLAG: --vmodule="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277577 4796 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.277588 4796 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.277796 4796 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.277806 4796 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.277815 4796 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.277823 4796 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.277832 4796 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.277839 4796 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.277850 4796 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.277860 4796 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.277869 4796 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.277879 4796 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.277889 4796 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.277898 4796 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.277907 4796 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.277916 4796 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278022 4796 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278032 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278043 4796 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278053 4796 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278103 4796 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278115 4796 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278124 4796 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278132 4796 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278141 4796 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278149 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278157 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278332 4796 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278342 4796 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278378 4796 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278422 4796 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278433 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278441 4796 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278449 4796 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278458 4796 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278466 4796 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278474 4796 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278481 4796 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278489 4796 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278497 4796 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278504 4796 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278512 4796 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278520 4796 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278528 4796 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278549 4796 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 06:46:20 crc 
kubenswrapper[4796]: W0127 06:46:20.278578 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278585 4796 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278593 4796 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278601 4796 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278609 4796 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278617 4796 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278624 4796 feature_gate.go:330] unrecognized feature gate: Example Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278632 4796 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278640 4796 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278648 4796 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278656 4796 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278663 4796 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278671 4796 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278679 4796 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278689 4796 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278697 4796 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278705 4796 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278714 4796 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278724 4796 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278733 4796 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278743 4796 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278752 4796 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278763 4796 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278771 4796 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278779 4796 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278787 4796 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278795 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.278803 4796 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.278817 4796 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.296843 4796 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.297225 4796 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297283 4796 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297290 4796 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297295 4796 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297300 4796 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297304 4796 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297308 4796 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297312 4796 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297316 4796 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297320 4796 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297324 4796 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297327 4796 feature_gate.go:330] unrecognized feature gate: Example Jan 27 06:46:20 crc 
kubenswrapper[4796]: W0127 06:46:20.297330 4796 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297334 4796 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297337 4796 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297340 4796 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297344 4796 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297348 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297352 4796 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297357 4796 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297362 4796 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297366 4796 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297371 4796 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297374 4796 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297378 4796 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297383 4796 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297388 4796 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297393 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297398 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297403 4796 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297408 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297413 4796 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297417 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297421 4796 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297427 4796 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297431 4796 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297435 4796 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297439 4796 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297443 4796 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297446 4796 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297450 4796 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297453 4796 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297457 4796 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297460 4796 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297464 4796 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297469 4796 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297473 4796 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297478 4796 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297482 4796 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297486 4796 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297489 4796 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297493 4796 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297497 4796 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297500 4796 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297505 4796 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297509 4796 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297512 4796 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297516 4796 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297520 4796 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297523 4796 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297527 4796 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297532 4796 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297539 4796 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297542 4796 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297546 4796 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297550 4796 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297564 4796 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297586 4796 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297589 4796 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297593 4796 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297596 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297600 4796 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.297607 4796 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297707 4796 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
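The feature_gate.go:386 entry above is the effective feature-gate map, printed with Go's default map formatting as {map[Name:bool ...]}; the surrounding feature_gate.go:330 warnings are OpenShift-specific gate names that the kubelet's own gate registry does not recognize, and the later entries show startup continuing past them at warning level. A minimal sketch, assuming only the textual shape of that summary line (this is not kubelet code), for pulling the map back out of the log:

// featuregates_sketch.go: extract the effective feature-gate map from a
// "feature gates: {map[...]}" summary line like the one logged above.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func parseFeatureGates(line string) (map[string]bool, error) {
	const prefix = "feature gates: {map["
	start := strings.Index(line, prefix)
	if start < 0 {
		return nil, fmt.Errorf("no feature-gate summary in line")
	}
	rest := line[start+len(prefix):]
	end := strings.Index(rest, "]}")
	if end < 0 {
		return nil, fmt.Errorf("unterminated feature-gate map")
	}
	gates := map[string]bool{}
	for _, kv := range strings.Fields(rest[:end]) {
		name, val, ok := strings.Cut(kv, ":")
		if !ok {
			continue
		}
		b, err := strconv.ParseBool(val)
		if err != nil {
			return nil, err
		}
		gates[name] = b
	}
	return gates, nil
}

func main() {
	// Shortened sample of the summary line above, for illustration only.
	line := `I0127 06:46:20.297607 4796 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true KMSv1:true ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}`
	gates, err := parseFeatureGates(line)
	if err != nil {
		panic(err)
	}
	fmt.Println(gates["ValidatingAdmissionPolicy"]) // true
}

The same approach applies to either of the two identical feature-gate summary lines in this log.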
Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297713 4796 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297718 4796 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297722 4796 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297726 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297731 4796 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297734 4796 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297738 4796 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297742 4796 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297746 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297750 4796 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297754 4796 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297757 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297762 4796 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297768 4796 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297772 4796 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297775 4796 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297779 4796 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297784 4796 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297788 4796 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297793 4796 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297797 4796 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297802 4796 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297806 4796 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297809 4796 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297813 4796 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297817 4796 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297820 4796 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297823 4796 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297827 4796 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297831 4796 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297834 4796 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297838 4796 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297841 4796 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297845 4796 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297849 4796 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297852 4796 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297856 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297859 4796 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297864 4796 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297868 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297872 4796 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297876 4796 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297880 4796 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297884 4796 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297889 4796 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297894 4796 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297898 4796 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297902 4796 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297906 4796 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297910 4796 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297914 4796 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297917 4796 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297921 4796 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297924 4796 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297928 4796 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297931 4796 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297935 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297938 4796 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297942 4796 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297945 4796 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297949 4796 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297952 4796 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297956 4796 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297959 4796 feature_gate.go:330] unrecognized feature gate: Example Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297963 4796 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 27 06:46:20 crc 
kubenswrapper[4796]: W0127 06:46:20.297966 4796 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297969 4796 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297973 4796 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297976 4796 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.297980 4796 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.297986 4796 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.298137 4796 server.go:940] "Client rotation is on, will bootstrap in background" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.303948 4796 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.304375 4796 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.306702 4796 server.go:997] "Starting client certificate rotation" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.306724 4796 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.306947 4796 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-25 11:40:57.754318856 +0000 UTC Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.307073 4796 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.384514 4796 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.387104 4796 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 06:46:20 crc kubenswrapper[4796]: E0127 06:46:20.387398 4796 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.443605 4796 log.go:25] "Validated CRI v1 runtime API" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.551166 4796 log.go:25] "Validated CRI v1 image API" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.553577 4796 server.go:1437] "Using 
cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.560690 4796 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-27-06-41-38-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.560732 4796 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}] Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.575609 4796 manager.go:217] Machine: {Timestamp:2026-01-27 06:46:20.573992705 +0000 UTC m=+1.680960052 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:ea2a725c-47df-4291-8c97-fc5620e930c7 BootID:ab5d23f4-0a1a-4348-a4ed-cd82856490af Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:fa:1d:ca Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:fa:1d:ca Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:d5:d9:de Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:32:d5:95 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:94:81:76 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:35:1d:6a Speed:-1 Mtu:1496} {Name:eth10 MacAddress:b6:70:79:b2:a5:9d Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:86:7c:dd:37:f2:78 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] 
SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.575847 4796 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.576177 4796 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.577921 4796 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.578160 4796 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.578201 4796 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.579004 4796 topology_manager.go:138] "Creating topology manager with none policy" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.579024 4796 container_manager_linux.go:303] "Creating device plugin manager" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.579799 4796 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.579826 4796 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.580080 4796 state_mem.go:36] "Initialized new in-memory state store" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.580185 4796 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.607092 4796 kubelet.go:418] "Attempting to sync node with API server" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.607158 4796 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" 
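The container_manager_linux.go:272 entry above carries the node config the kubelet will enforce: SystemReserved of 200m CPU, 350Mi memory and 350Mi ephemeral storage, no KubeReserved, and a hard eviction threshold of 100Mi for memory.available, while the earlier Machine entry reports MemoryCapacity:33654124544. A back-of-the-envelope sketch (not kubelet code), assuming the standard node-allocatable formula allocatable = capacity - kube-reserved - system-reserved - hard eviction threshold, of what that leaves for pods on this node:

// allocatable_sketch.go: illustrate the node-allocatable arithmetic using the
// memory numbers visible in the log entries above.
package main

import "fmt"

func main() {
	const (
		mi              = 1024 * 1024
		capacityBytes   = 33654124544 // MemoryCapacity reported by cAdvisor above
		kubeReserved    = 0           // KubeReserved is null in this node config
		systemReserved  = 350 * mi    // SystemReserved memory: "350Mi"
		hardEvictionMem = 100 * mi    // memory.available hard eviction threshold: "100Mi"
	)
	allocatable := capacityBytes - kubeReserved - systemReserved - hardEvictionMem
	fmt.Printf("allocatable memory ~ %d bytes (%.2f GiB)\n",
		allocatable, float64(allocatable)/(1024*1024*1024))
}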
Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.607194 4796 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.607212 4796 kubelet.go:324] "Adding apiserver pod source"
Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.607232 4796 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.634214 4796 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.634763 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused
Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.634794 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused
Jan 27 06:46:20 crc kubenswrapper[4796]: E0127 06:46:20.634895 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError"
Jan 27 06:46:20 crc kubenswrapper[4796]: E0127 06:46:20.634913 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError"
Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.635730 4796 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
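The reflector failures above, like the earlier certificate-signing-request error, are all the same symptom: TCP connections to https://api-int.crc.testing:6443 (38.102.83.166) are refused because the API server is not yet accepting connections, and the kubelet keeps retrying in the background. A small standalone probe, sketched with only the Go standard library and the host:port taken from these entries, that separates a DNS problem from a closed port:

// apiprobe_sketch.go: diagnostic sketch (not part of the kubelet) for the
// "dial tcp ... connect: connection refused" failures logged above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const endpoint = "api-int.crc.testing:6443" // host:port taken from the log entries above

	host, _, err := net.SplitHostPort(endpoint)
	if err != nil {
		panic(err)
	}
	addrs, err := net.LookupHost(host)
	fmt.Printf("resolved %s -> %v (err=%v)\n", host, addrs, err)

	conn, err := net.DialTimeout("tcp", endpoint, 3*time.Second)
	if err != nil {
		// e.g. "connect: connection refused" while the apiserver is still down
		fmt.Println("tcp dial failed:", err)
		return
	}
	conn.Close()
	fmt.Println("tcp dial succeeded; the apiserver port is accepting connections")
}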
Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.647672 4796 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.651757 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.651803 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.651814 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.651825 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.651841 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.651852 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.651861 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.651876 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.651888 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.651900 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.651917 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.651929 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.654161 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.654912 4796 server.go:1280] "Started kubelet"
Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.656736 4796 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.657285 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.657320 4796 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.657423 4796 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.657484 4796 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.657590 4796 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.657630 4796 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 27 06:46:20 crc systemd[1]: Started Kubernetes Kubelet.
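Each plugins.go:603 entry above registers one in-tree volume plugin; the pluginName corresponds to a volume source field in a Pod spec (kubernetes.io/empty-dir to emptyDir, kubernetes.io/configmap to configMap, and so on), with kubernetes.io/csi covering volumes served by CSI drivers. A minimal sketch of the Pod-side counterparts for two of the plugins loaded above, assuming the k8s.io/api module is available; the volume and ConfigMap names here are made up for illustration:

// volumes_sketch.go: Pod-side volume sources matching two of the in-tree
// plugins registered in the log entries above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	volumes := []corev1.Volume{
		{
			// handled by the kubernetes.io/empty-dir plugin
			Name:         "scratch",
			VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
		},
		{
			// handled by the kubernetes.io/configmap plugin
			Name: "app-config",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "my-config"},
				},
			},
		},
	}
	for _, v := range volumes {
		fmt.Println("volume:", v.Name)
	}
}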
Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.657827 4796 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 27 06:46:20 crc kubenswrapper[4796]: E0127 06:46:20.657597 4796 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 06:46:20 crc kubenswrapper[4796]: E0127 06:46:20.658177 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" interval="200ms" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.657421 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 06:24:01.543139736 +0000 UTC Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.658287 4796 factory.go:55] Registering systemd factory Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.658445 4796 factory.go:221] Registration of the systemd container factory successfully Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.658617 4796 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.658658 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Jan 27 06:46:20 crc kubenswrapper[4796]: E0127 06:46:20.658783 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.658837 4796 factory.go:153] Registering CRI-O factory Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.658866 4796 factory.go:221] Registration of the crio container factory successfully Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.658961 4796 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.659000 4796 factory.go:103] Registering Raw factory Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.659036 4796 manager.go:1196] Started watching for new ooms in manager Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.659871 4796 manager.go:319] Starting recovery of all containers Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.688600 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.688786 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.688810 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.688825 4796 server.go:460] "Adding debug handlers to kubelet server" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.688836 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689001 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689024 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689056 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689076 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689132 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689157 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689189 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689209 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689227 4796 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689258 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689282 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689296 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689310 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689327 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689343 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689364 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689381 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689401 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689417 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689432 4796 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689453 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689480 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689502 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689524 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689568 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689584 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689601 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689632 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689663 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689695 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689710 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689724 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689740 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689760 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689786 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689815 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689839 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689868 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689894 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689922 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689940 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689954 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689973 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.689984 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.690006 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.690021 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.690038 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.690061 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.690091 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.690109 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.690132 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.690154 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.690170 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.690192 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.690213 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.690236 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.690267 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.691212 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.691434 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.691489 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.691518 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.691540 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.691587 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.691605 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.691628 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.691719 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.691737 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.691799 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.691832 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.691913 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.691946 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.691963 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.691979 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.691998 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.692015 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.692184 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.692277 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.692326 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.692375 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.692485 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.692553 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.692578 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.692631 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.692652 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.692669 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.692722 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.692742 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.692883 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.693011 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.693048 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.693075 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.693107 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.693263 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.693289 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.693325 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.693379 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.693401 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.693441 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.693455 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.693488 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.693601 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.693628 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.693651 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.693748 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.693827 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.693889 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.693920 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.693951 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" 
volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.693980 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.694090 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.694119 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.694310 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.694411 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.694436 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.694452 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.694491 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.694507 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.704419 4796 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.704484 4796 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.704504 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.704519 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.704537 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.704574 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.704588 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.704640 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.704655 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.704667 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.704679 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.704693 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.704709 4796 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.704723 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.704738 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.704753 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.704766 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.704777 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.704789 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.704801 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.704813 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.704826 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.704838 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.704850 4796 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.704861 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.704877 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.704889 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.704901 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.704912 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.704925 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.704939 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.704951 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.704965 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.704977 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.704986 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705002 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705017 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705042 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705064 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705088 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705106 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705120 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705131 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705148 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705160 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705180 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705194 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705208 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705223 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705236 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705248 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705262 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705274 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705286 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705298 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705311 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705471 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" 
volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705485 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705498 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705513 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705526 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705542 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705667 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705685 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705701 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705719 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705732 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705751 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705765 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705782 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705797 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705811 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705826 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705844 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705859 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705872 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705885 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705898 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705912 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705932 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705946 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705961 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705981 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.705998 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.706018 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.706034 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.706047 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.706060 4796 reconstruct.go:97] "Volume reconstruction finished" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.706070 4796 reconciler.go:26] "Reconciler: start to sync state" Jan 27 06:46:20 crc kubenswrapper[4796]: E0127 06:46:20.705470 4796 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.166:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e8395404534c7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 06:46:20.654826695 +0000 UTC 
m=+1.761794032,LastTimestamp:2026-01-27 06:46:20.654826695 +0000 UTC m=+1.761794032,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.710460 4796 manager.go:324] Recovery completed Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.723417 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.725251 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.725318 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.725332 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.743210 4796 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.745799 4796 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.745851 4796 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.745883 4796 kubelet.go:2335] "Starting kubelet main sync loop" Jan 27 06:46:20 crc kubenswrapper[4796]: E0127 06:46:20.745935 4796 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 27 06:46:20 crc kubenswrapper[4796]: W0127 06:46:20.747328 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Jan 27 06:46:20 crc kubenswrapper[4796]: E0127 06:46:20.747423 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.749992 4796 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.750015 4796 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.750036 4796 state_mem.go:36] "Initialized new in-memory state store" Jan 27 06:46:20 crc kubenswrapper[4796]: E0127 06:46:20.758628 4796 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 06:46:20 crc kubenswrapper[4796]: E0127 06:46:20.846177 4796 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 27 06:46:20 crc kubenswrapper[4796]: E0127 06:46:20.858921 4796 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 06:46:20 crc kubenswrapper[4796]: E0127 06:46:20.859481 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" interval="400ms" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.922576 4796 policy_none.go:49] "None policy: Start" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.924186 4796 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 27 06:46:20 crc kubenswrapper[4796]: I0127 06:46:20.924255 4796 state_mem.go:35] "Initializing new in-memory state store" Jan 27 06:46:20 crc kubenswrapper[4796]: E0127 06:46:20.959948 4796 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 06:46:21 crc kubenswrapper[4796]: E0127 06:46:21.046401 4796 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 27 06:46:21 crc kubenswrapper[4796]: E0127 06:46:21.060070 4796 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.061844 4796 manager.go:334] "Starting Device Plugin manager" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.061916 4796 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.061933 4796 server.go:79] "Starting device plugin registration server" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.062535 4796 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.062582 4796 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.062809 4796 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.062917 4796 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.062928 4796 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 27 06:46:21 crc kubenswrapper[4796]: E0127 06:46:21.069480 4796 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.163620 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.165137 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.165399 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.165414 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.165487 4796 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 06:46:21 crc kubenswrapper[4796]: E0127 06:46:21.166252 4796 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.166:6443: connect: connection refused" node="crc" Jan 27 06:46:21 crc 
kubenswrapper[4796]: E0127 06:46:21.260937 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" interval="800ms" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.366435 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.369034 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.369073 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.369084 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.369109 4796 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 06:46:21 crc kubenswrapper[4796]: E0127 06:46:21.369729 4796 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.166:6443: connect: connection refused" node="crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.446638 4796 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.446845 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.448629 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.448694 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.448715 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.449049 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.449598 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.449701 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.451214 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.451271 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.451285 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.451938 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.451990 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.452020 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.452219 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.452618 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.452751 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.453456 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.453503 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.453521 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.453719 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.453935 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.454006 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.454190 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.454265 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.454288 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.455235 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.455285 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.455307 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.455442 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.455658 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.455704 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.456481 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.456577 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.456481 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.456607 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.456713 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.456735 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.456793 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.456930 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.456997 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.457307 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.457376 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.458299 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.458359 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.458381 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.516877 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.516954 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.516985 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.517011 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.517085 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.517151 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.517204 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" 
(UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.517228 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.517246 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.517264 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.517282 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.517302 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.517319 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.517355 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.517394 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: W0127 06:46:21.601763 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Jan 27 06:46:21 crc kubenswrapper[4796]: E0127 06:46:21.601910 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.619230 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.619298 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.619332 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.619368 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.619409 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.619444 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.619478 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.619568 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.619526 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.619667 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.619674 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.619697 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.619774 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.619725 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.619697 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.619821 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.619893 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.619745 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.619846 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.620074 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.619735 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.620164 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.620169 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.620111 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.620260 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.620303 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.620341 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.620373 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.620404 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.620493 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.658790 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 13:54:22.715253935 +0000 UTC Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.659522 4796 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Jan 27 06:46:21 crc kubenswrapper[4796]: W0127 06:46:21.757200 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Jan 27 06:46:21 crc kubenswrapper[4796]: E0127 06:46:21.757278 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.770626 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.771654 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.771715 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.771729 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.771762 4796 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 06:46:21 crc kubenswrapper[4796]: E0127 06:46:21.772343 4796 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.166:6443: connect: connection refused" node="crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.783944 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.790774 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: W0127 06:46:21.797448 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Jan 27 06:46:21 crc kubenswrapper[4796]: E0127 06:46:21.797603 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.811182 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.819297 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: I0127 06:46:21.823583 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 27 06:46:21 crc kubenswrapper[4796]: W0127 06:46:21.861784 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Jan 27 06:46:21 crc kubenswrapper[4796]: E0127 06:46:21.862349 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError" Jan 27 06:46:21 crc kubenswrapper[4796]: W0127 06:46:21.950877 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-a8f0c96fa9c29b4f30167f4d3e7c78ddd6ffe63dc6b3953421c6c27dfbec933c WatchSource:0}: Error finding container a8f0c96fa9c29b4f30167f4d3e7c78ddd6ffe63dc6b3953421c6c27dfbec933c: Status 404 returned error can't find the container with id a8f0c96fa9c29b4f30167f4d3e7c78ddd6ffe63dc6b3953421c6c27dfbec933c Jan 27 06:46:21 crc kubenswrapper[4796]: W0127 06:46:21.951392 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-84eaa8be2f68af3b9a10e6f0d408d958eb37115e63ca225dda26132c12e64d9e WatchSource:0}: Error finding container 84eaa8be2f68af3b9a10e6f0d408d958eb37115e63ca225dda26132c12e64d9e: Status 404 returned error can't find the container with id 84eaa8be2f68af3b9a10e6f0d408d958eb37115e63ca225dda26132c12e64d9e Jan 27 06:46:21 crc kubenswrapper[4796]: W0127 06:46:21.951997 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-146eb205e549932416ff8ff7733f4441317ec039a93caa78b14b85ec58897fe9 WatchSource:0}: Error finding 
container 146eb205e549932416ff8ff7733f4441317ec039a93caa78b14b85ec58897fe9: Status 404 returned error can't find the container with id 146eb205e549932416ff8ff7733f4441317ec039a93caa78b14b85ec58897fe9 Jan 27 06:46:21 crc kubenswrapper[4796]: W0127 06:46:21.953106 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-97d1007e6ca8492a59667c1a4e6418e08d4a83ebf565b134b9f22d8bb609745a WatchSource:0}: Error finding container 97d1007e6ca8492a59667c1a4e6418e08d4a83ebf565b134b9f22d8bb609745a: Status 404 returned error can't find the container with id 97d1007e6ca8492a59667c1a4e6418e08d4a83ebf565b134b9f22d8bb609745a Jan 27 06:46:21 crc kubenswrapper[4796]: W0127 06:46:21.954139 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-1a64b804a5fcf5f64fc0ecc48daf1ea95d34f1d54bf52908f1ff409a099edb2d WatchSource:0}: Error finding container 1a64b804a5fcf5f64fc0ecc48daf1ea95d34f1d54bf52908f1ff409a099edb2d: Status 404 returned error can't find the container with id 1a64b804a5fcf5f64fc0ecc48daf1ea95d34f1d54bf52908f1ff409a099edb2d Jan 27 06:46:22 crc kubenswrapper[4796]: E0127 06:46:22.062581 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" interval="1.6s" Jan 27 06:46:22 crc kubenswrapper[4796]: I0127 06:46:22.435722 4796 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 06:46:22 crc kubenswrapper[4796]: E0127 06:46:22.437229 4796 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError" Jan 27 06:46:22 crc kubenswrapper[4796]: I0127 06:46:22.572751 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:22 crc kubenswrapper[4796]: I0127 06:46:22.575419 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:22 crc kubenswrapper[4796]: I0127 06:46:22.575464 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:22 crc kubenswrapper[4796]: I0127 06:46:22.575475 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:22 crc kubenswrapper[4796]: I0127 06:46:22.575506 4796 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 06:46:22 crc kubenswrapper[4796]: E0127 06:46:22.575977 4796 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.166:6443: connect: connection refused" node="crc" Jan 27 06:46:22 crc kubenswrapper[4796]: I0127 06:46:22.659007 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 01:31:19.706935891 +0000 UTC Jan 27 
06:46:22 crc kubenswrapper[4796]: I0127 06:46:22.659440 4796 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Jan 27 06:46:22 crc kubenswrapper[4796]: I0127 06:46:22.751938 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"97d1007e6ca8492a59667c1a4e6418e08d4a83ebf565b134b9f22d8bb609745a"} Jan 27 06:46:22 crc kubenswrapper[4796]: I0127 06:46:22.752681 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"84eaa8be2f68af3b9a10e6f0d408d958eb37115e63ca225dda26132c12e64d9e"} Jan 27 06:46:22 crc kubenswrapper[4796]: I0127 06:46:22.753282 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a8f0c96fa9c29b4f30167f4d3e7c78ddd6ffe63dc6b3953421c6c27dfbec933c"} Jan 27 06:46:22 crc kubenswrapper[4796]: I0127 06:46:22.755658 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"146eb205e549932416ff8ff7733f4441317ec039a93caa78b14b85ec58897fe9"} Jan 27 06:46:22 crc kubenswrapper[4796]: I0127 06:46:22.756658 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1a64b804a5fcf5f64fc0ecc48daf1ea95d34f1d54bf52908f1ff409a099edb2d"} Jan 27 06:46:23 crc kubenswrapper[4796]: W0127 06:46:23.492820 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Jan 27 06:46:23 crc kubenswrapper[4796]: E0127 06:46:23.492909 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError" Jan 27 06:46:23 crc kubenswrapper[4796]: I0127 06:46:23.659344 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 07:31:07.843091627 +0000 UTC Jan 27 06:46:23 crc kubenswrapper[4796]: I0127 06:46:23.659875 4796 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Jan 27 06:46:23 crc kubenswrapper[4796]: E0127 06:46:23.663436 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" interval="3.2s" Jan 27 06:46:23 crc kubenswrapper[4796]: W0127 
06:46:23.689840 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Jan 27 06:46:23 crc kubenswrapper[4796]: E0127 06:46:23.689922 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError" Jan 27 06:46:23 crc kubenswrapper[4796]: W0127 06:46:23.694479 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Jan 27 06:46:23 crc kubenswrapper[4796]: E0127 06:46:23.694579 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError" Jan 27 06:46:24 crc kubenswrapper[4796]: I0127 06:46:24.176340 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:24 crc kubenswrapper[4796]: I0127 06:46:24.182142 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:24 crc kubenswrapper[4796]: I0127 06:46:24.182195 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:24 crc kubenswrapper[4796]: I0127 06:46:24.182217 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:24 crc kubenswrapper[4796]: I0127 06:46:24.182256 4796 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 06:46:24 crc kubenswrapper[4796]: E0127 06:46:24.182980 4796 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.166:6443: connect: connection refused" node="crc" Jan 27 06:46:24 crc kubenswrapper[4796]: W0127 06:46:24.213204 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Jan 27 06:46:24 crc kubenswrapper[4796]: E0127 06:46:24.213342 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError" Jan 27 06:46:24 crc kubenswrapper[4796]: I0127 06:46:24.659589 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 00:06:44.100163 +0000 UTC Jan 27 06:46:24 crc kubenswrapper[4796]: I0127 06:46:24.659675 4796 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Jan 27 06:46:24 crc kubenswrapper[4796]: I0127 06:46:24.766455 4796 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="859ce6407959a0931a232e38824568585b947803cb4437a0d0e5101c8b19a36b" exitCode=0 Jan 27 06:46:24 crc kubenswrapper[4796]: I0127 06:46:24.766616 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"859ce6407959a0931a232e38824568585b947803cb4437a0d0e5101c8b19a36b"} Jan 27 06:46:24 crc kubenswrapper[4796]: I0127 06:46:24.767208 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:24 crc kubenswrapper[4796]: I0127 06:46:24.769385 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:24 crc kubenswrapper[4796]: I0127 06:46:24.769406 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:24 crc kubenswrapper[4796]: I0127 06:46:24.769414 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:24 crc kubenswrapper[4796]: I0127 06:46:24.772996 4796 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="aaa72f37c854df50cd7fa5fbe526f1fba2f70106de6ce6c178dd711b498b7466" exitCode=0 Jan 27 06:46:24 crc kubenswrapper[4796]: I0127 06:46:24.773108 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"aaa72f37c854df50cd7fa5fbe526f1fba2f70106de6ce6c178dd711b498b7466"} Jan 27 06:46:24 crc kubenswrapper[4796]: I0127 06:46:24.773425 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:24 crc kubenswrapper[4796]: I0127 06:46:24.775846 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:24 crc kubenswrapper[4796]: I0127 06:46:24.775924 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:24 crc kubenswrapper[4796]: I0127 06:46:24.775945 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:24 crc kubenswrapper[4796]: I0127 06:46:24.778758 4796 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e" exitCode=0 Jan 27 06:46:24 crc kubenswrapper[4796]: I0127 06:46:24.778878 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e"} Jan 27 06:46:24 crc kubenswrapper[4796]: I0127 06:46:24.778947 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:24 crc kubenswrapper[4796]: I0127 06:46:24.780738 4796 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:24 crc kubenswrapper[4796]: I0127 06:46:24.780787 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:24 crc kubenswrapper[4796]: I0127 06:46:24.780805 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:24 crc kubenswrapper[4796]: I0127 06:46:24.782177 4796 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994" exitCode=0 Jan 27 06:46:24 crc kubenswrapper[4796]: I0127 06:46:24.782275 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994"} Jan 27 06:46:24 crc kubenswrapper[4796]: I0127 06:46:24.782355 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:24 crc kubenswrapper[4796]: I0127 06:46:24.783621 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:24 crc kubenswrapper[4796]: I0127 06:46:24.783674 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:24 crc kubenswrapper[4796]: I0127 06:46:24.783693 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:24 crc kubenswrapper[4796]: I0127 06:46:24.784370 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"948d327df2d50b17a182fed3cb301b1396229844a41a5270ea33d7c5e9f95db4"} Jan 27 06:46:24 crc kubenswrapper[4796]: I0127 06:46:24.784438 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"48948084ba0e80ec0b29617d1cec75a0d5e625e2343ee2d2cb3485bfbc526e3b"} Jan 27 06:46:24 crc kubenswrapper[4796]: I0127 06:46:24.787127 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:24 crc kubenswrapper[4796]: I0127 06:46:24.788175 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:24 crc kubenswrapper[4796]: I0127 06:46:24.788199 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:24 crc kubenswrapper[4796]: I0127 06:46:24.788208 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:25 crc kubenswrapper[4796]: I0127 06:46:25.659362 4796 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Jan 27 06:46:25 crc kubenswrapper[4796]: I0127 06:46:25.660377 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 
2025-12-31 00:21:17.439048418 +0000 UTC Jan 27 06:46:25 crc kubenswrapper[4796]: I0127 06:46:25.789993 4796 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec" exitCode=0 Jan 27 06:46:25 crc kubenswrapper[4796]: I0127 06:46:25.790261 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:25 crc kubenswrapper[4796]: I0127 06:46:25.791027 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec"} Jan 27 06:46:25 crc kubenswrapper[4796]: I0127 06:46:25.791689 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:25 crc kubenswrapper[4796]: I0127 06:46:25.791728 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:25 crc kubenswrapper[4796]: I0127 06:46:25.791745 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:25 crc kubenswrapper[4796]: I0127 06:46:25.795069 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5"} Jan 27 06:46:25 crc kubenswrapper[4796]: I0127 06:46:25.798316 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8a6b5f1a21e3d2b93baf990f32b73b024a048bffefe4304ff435a2bd428b099b"} Jan 27 06:46:25 crc kubenswrapper[4796]: I0127 06:46:25.801078 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4703bc30d7f8c3e6340ce7f818985b99667f3703b1399e8542341618f423ce05"} Jan 27 06:46:25 crc kubenswrapper[4796]: I0127 06:46:25.803911 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"a6eebf6f0bf5d7eb7abb8cd624d134d69945b319d695580cf4f540ac6870a1d6"} Jan 27 06:46:25 crc kubenswrapper[4796]: I0127 06:46:25.804025 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:25 crc kubenswrapper[4796]: I0127 06:46:25.805344 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:25 crc kubenswrapper[4796]: I0127 06:46:25.805402 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:25 crc kubenswrapper[4796]: I0127 06:46:25.805421 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:26 crc kubenswrapper[4796]: I0127 06:46:26.659694 4796 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection 
refused Jan 27 06:46:26 crc kubenswrapper[4796]: I0127 06:46:26.660614 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 17:38:12.5770547 +0000 UTC Jan 27 06:46:26 crc kubenswrapper[4796]: I0127 06:46:26.810392 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1"} Jan 27 06:46:26 crc kubenswrapper[4796]: I0127 06:46:26.810482 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b"} Jan 27 06:46:26 crc kubenswrapper[4796]: I0127 06:46:26.813605 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0dc39fbcef1d044b8d23138e48425ad33e0650e222fb2131f45b59ca7d954820"} Jan 27 06:46:26 crc kubenswrapper[4796]: I0127 06:46:26.813631 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:26 crc kubenswrapper[4796]: I0127 06:46:26.814581 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:26 crc kubenswrapper[4796]: I0127 06:46:26.814607 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:26 crc kubenswrapper[4796]: I0127 06:46:26.814616 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:26 crc kubenswrapper[4796]: I0127 06:46:26.816192 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ef72b83b73d0b802d6dd4dfd087f388d7a1e01621dd126b13729831545461d5c"} Jan 27 06:46:26 crc kubenswrapper[4796]: I0127 06:46:26.816254 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:26 crc kubenswrapper[4796]: I0127 06:46:26.816257 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"eb6a258372b9862ec4e513ffb93a28fb346a67b05941486a4be0275b94cea2a7"} Jan 27 06:46:26 crc kubenswrapper[4796]: I0127 06:46:26.817308 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:26 crc kubenswrapper[4796]: I0127 06:46:26.817326 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:26 crc kubenswrapper[4796]: I0127 06:46:26.817333 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:26 crc kubenswrapper[4796]: I0127 06:46:26.818812 4796 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a" exitCode=0 Jan 27 06:46:26 crc kubenswrapper[4796]: I0127 06:46:26.818891 4796 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:26 crc kubenswrapper[4796]: I0127 06:46:26.818909 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a"} Jan 27 06:46:26 crc kubenswrapper[4796]: I0127 06:46:26.818964 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:26 crc kubenswrapper[4796]: I0127 06:46:26.820056 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:26 crc kubenswrapper[4796]: I0127 06:46:26.820087 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:26 crc kubenswrapper[4796]: I0127 06:46:26.820099 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:26 crc kubenswrapper[4796]: I0127 06:46:26.820716 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:26 crc kubenswrapper[4796]: I0127 06:46:26.820750 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:26 crc kubenswrapper[4796]: I0127 06:46:26.820763 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:26 crc kubenswrapper[4796]: I0127 06:46:26.824752 4796 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 06:46:26 crc kubenswrapper[4796]: E0127 06:46:26.825730 4796 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError" Jan 27 06:46:26 crc kubenswrapper[4796]: E0127 06:46:26.865104 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" interval="6.4s" Jan 27 06:46:26 crc kubenswrapper[4796]: E0127 06:46:26.918304 4796 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.166:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e8395404534c7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 06:46:20.654826695 +0000 UTC m=+1.761794032,LastTimestamp:2026-01-27 06:46:20.654826695 +0000 UTC m=+1.761794032,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 06:46:27 crc kubenswrapper[4796]: W0127 06:46:27.378896 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: 
Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Jan 27 06:46:27 crc kubenswrapper[4796]: E0127 06:46:27.378991 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError" Jan 27 06:46:27 crc kubenswrapper[4796]: I0127 06:46:27.383321 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:27 crc kubenswrapper[4796]: I0127 06:46:27.384637 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:27 crc kubenswrapper[4796]: I0127 06:46:27.384661 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:27 crc kubenswrapper[4796]: I0127 06:46:27.384670 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:27 crc kubenswrapper[4796]: I0127 06:46:27.384690 4796 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 06:46:27 crc kubenswrapper[4796]: E0127 06:46:27.384994 4796 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.166:6443: connect: connection refused" node="crc" Jan 27 06:46:27 crc kubenswrapper[4796]: I0127 06:46:27.659366 4796 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Jan 27 06:46:27 crc kubenswrapper[4796]: I0127 06:46:27.661430 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 16:43:21.291411401 +0000 UTC Jan 27 06:46:27 crc kubenswrapper[4796]: I0127 06:46:27.823644 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8be41d70648c6527439ed95843ec087f65506642438fab67094697952d0c69ed"} Jan 27 06:46:27 crc kubenswrapper[4796]: I0127 06:46:27.827225 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"18e4e34dfeb515f221d9d4f666df865797a9ffc2d5bd6a4c1ce75c52e07be5a8"} Jan 27 06:46:27 crc kubenswrapper[4796]: I0127 06:46:27.827260 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb"} Jan 27 06:46:27 crc kubenswrapper[4796]: I0127 06:46:27.827300 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:27 crc kubenswrapper[4796]: I0127 06:46:27.827351 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:27 crc kubenswrapper[4796]: I0127 06:46:27.827366 4796 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 06:46:27 crc kubenswrapper[4796]: I0127 06:46:27.827415 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:27 crc kubenswrapper[4796]: I0127 06:46:27.828375 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:27 crc kubenswrapper[4796]: I0127 06:46:27.828412 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:27 crc kubenswrapper[4796]: I0127 06:46:27.828428 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:27 crc kubenswrapper[4796]: I0127 06:46:27.828792 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:27 crc kubenswrapper[4796]: I0127 06:46:27.828825 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:27 crc kubenswrapper[4796]: I0127 06:46:27.829702 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:27 crc kubenswrapper[4796]: I0127 06:46:27.829830 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:27 crc kubenswrapper[4796]: I0127 06:46:27.829917 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:27 crc kubenswrapper[4796]: I0127 06:46:27.829932 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:27 crc kubenswrapper[4796]: I0127 06:46:27.830981 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:46:28 crc kubenswrapper[4796]: W0127 06:46:28.156262 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Jan 27 06:46:28 crc kubenswrapper[4796]: E0127 06:46:28.156363 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError" Jan 27 06:46:28 crc kubenswrapper[4796]: W0127 06:46:28.397204 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Jan 27 06:46:28 crc kubenswrapper[4796]: E0127 06:46:28.397331 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError" Jan 27 06:46:28 crc kubenswrapper[4796]: I0127 06:46:28.660286 4796 csi_plugin.go:884] 
Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Jan 27 06:46:28 crc kubenswrapper[4796]: I0127 06:46:28.662447 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 18:14:29.818758263 +0000 UTC Jan 27 06:46:28 crc kubenswrapper[4796]: I0127 06:46:28.834906 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8e496a7cea3a6ba7b8e4e60bd02f9cafc56cc4cf1049524b15749352b71b1551"} Jan 27 06:46:28 crc kubenswrapper[4796]: I0127 06:46:28.835018 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:28 crc kubenswrapper[4796]: I0127 06:46:28.835124 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:28 crc kubenswrapper[4796]: I0127 06:46:28.835148 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:28 crc kubenswrapper[4796]: I0127 06:46:28.835176 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:46:28 crc kubenswrapper[4796]: I0127 06:46:28.836882 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:28 crc kubenswrapper[4796]: I0127 06:46:28.836909 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:28 crc kubenswrapper[4796]: I0127 06:46:28.836918 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:28 crc kubenswrapper[4796]: I0127 06:46:28.837217 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:28 crc kubenswrapper[4796]: I0127 06:46:28.837277 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:28 crc kubenswrapper[4796]: I0127 06:46:28.837296 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:28 crc kubenswrapper[4796]: I0127 06:46:28.837890 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:28 crc kubenswrapper[4796]: I0127 06:46:28.837946 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:28 crc kubenswrapper[4796]: I0127 06:46:28.837967 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:28 crc kubenswrapper[4796]: I0127 06:46:28.920626 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:46:29 crc kubenswrapper[4796]: I0127 06:46:29.540318 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:46:29 crc kubenswrapper[4796]: I0127 06:46:29.540804 4796 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe 
status=failure output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 27 06:46:29 crc kubenswrapper[4796]: I0127 06:46:29.540901 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused" Jan 27 06:46:29 crc kubenswrapper[4796]: I0127 06:46:29.659927 4796 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Jan 27 06:46:29 crc kubenswrapper[4796]: I0127 06:46:29.663135 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 17:59:03.826668706 +0000 UTC Jan 27 06:46:29 crc kubenswrapper[4796]: I0127 06:46:29.842563 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c51f23c6ba8596716e9d0b3001ed8d1494f186950ddb86e3187e98b5a9c5940c"} Jan 27 06:46:29 crc kubenswrapper[4796]: I0127 06:46:29.842625 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c40ca26c279d35c5369a97a8926c8c76b5773da64438b08a0fd721604f5cf1a2"} Jan 27 06:46:29 crc kubenswrapper[4796]: I0127 06:46:29.842687 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:29 crc kubenswrapper[4796]: I0127 06:46:29.843597 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:29 crc kubenswrapper[4796]: I0127 06:46:29.843647 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:29 crc kubenswrapper[4796]: I0127 06:46:29.843659 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:30 crc kubenswrapper[4796]: W0127 06:46:30.056342 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.166:6443: connect: connection refused Jan 27 06:46:30 crc kubenswrapper[4796]: E0127 06:46:30.056483 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.166:6443: connect: connection refused" logger="UnhandledError" Jan 27 06:46:30 crc kubenswrapper[4796]: I0127 06:46:30.663855 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 18:37:15.364036483 +0000 UTC Jan 27 06:46:30 crc kubenswrapper[4796]: I0127 06:46:30.848687 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 06:46:30 crc kubenswrapper[4796]: I0127 06:46:30.851298 4796 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="18e4e34dfeb515f221d9d4f666df865797a9ffc2d5bd6a4c1ce75c52e07be5a8" exitCode=255 Jan 27 06:46:30 crc kubenswrapper[4796]: I0127 06:46:30.851405 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"18e4e34dfeb515f221d9d4f666df865797a9ffc2d5bd6a4c1ce75c52e07be5a8"} Jan 27 06:46:30 crc kubenswrapper[4796]: I0127 06:46:30.851463 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:30 crc kubenswrapper[4796]: I0127 06:46:30.852806 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:30 crc kubenswrapper[4796]: I0127 06:46:30.852849 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:30 crc kubenswrapper[4796]: I0127 06:46:30.852862 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:30 crc kubenswrapper[4796]: I0127 06:46:30.853507 4796 scope.go:117] "RemoveContainer" containerID="18e4e34dfeb515f221d9d4f666df865797a9ffc2d5bd6a4c1ce75c52e07be5a8" Jan 27 06:46:30 crc kubenswrapper[4796]: I0127 06:46:30.858368 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ab5e7ccdedeb2a9cc67c1eb6014f0a327f009baace0decc678fd319c7f02e63d"} Jan 27 06:46:30 crc kubenswrapper[4796]: I0127 06:46:30.858425 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:30 crc kubenswrapper[4796]: I0127 06:46:30.859256 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:30 crc kubenswrapper[4796]: I0127 06:46:30.859314 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:30 crc kubenswrapper[4796]: I0127 06:46:30.859335 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:31 crc kubenswrapper[4796]: I0127 06:46:31.010461 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 27 06:46:31 crc kubenswrapper[4796]: E0127 06:46:31.069734 4796 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 06:46:31 crc kubenswrapper[4796]: I0127 06:46:31.402063 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:46:31 crc kubenswrapper[4796]: I0127 06:46:31.402224 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:31 crc kubenswrapper[4796]: I0127 06:46:31.403454 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:31 crc kubenswrapper[4796]: I0127 06:46:31.403491 4796 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:31 crc kubenswrapper[4796]: I0127 06:46:31.403503 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:31 crc kubenswrapper[4796]: I0127 06:46:31.665002 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 10:14:37.150148501 +0000 UTC Jan 27 06:46:31 crc kubenswrapper[4796]: I0127 06:46:31.867383 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 06:46:31 crc kubenswrapper[4796]: I0127 06:46:31.870084 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c"} Jan 27 06:46:31 crc kubenswrapper[4796]: I0127 06:46:31.870181 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:31 crc kubenswrapper[4796]: I0127 06:46:31.870232 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:31 crc kubenswrapper[4796]: I0127 06:46:31.871333 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:31 crc kubenswrapper[4796]: I0127 06:46:31.871383 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:31 crc kubenswrapper[4796]: I0127 06:46:31.871400 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:31 crc kubenswrapper[4796]: I0127 06:46:31.871437 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:31 crc kubenswrapper[4796]: I0127 06:46:31.871499 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:31 crc kubenswrapper[4796]: I0127 06:46:31.871523 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:32 crc kubenswrapper[4796]: I0127 06:46:32.123698 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:46:32 crc kubenswrapper[4796]: I0127 06:46:32.123980 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:32 crc kubenswrapper[4796]: I0127 06:46:32.125251 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:32 crc kubenswrapper[4796]: I0127 06:46:32.125285 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:32 crc kubenswrapper[4796]: I0127 06:46:32.125293 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:32 crc kubenswrapper[4796]: I0127 06:46:32.666012 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 20:15:41.079366702 +0000 UTC Jan 27 06:46:32 crc 
kubenswrapper[4796]: I0127 06:46:32.873359 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:32 crc kubenswrapper[4796]: I0127 06:46:32.873492 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:32 crc kubenswrapper[4796]: I0127 06:46:32.873642 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:46:32 crc kubenswrapper[4796]: I0127 06:46:32.874775 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:32 crc kubenswrapper[4796]: I0127 06:46:32.874831 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:32 crc kubenswrapper[4796]: I0127 06:46:32.874858 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:32 crc kubenswrapper[4796]: I0127 06:46:32.874941 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:32 crc kubenswrapper[4796]: I0127 06:46:32.874971 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:32 crc kubenswrapper[4796]: I0127 06:46:32.874990 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:33 crc kubenswrapper[4796]: I0127 06:46:33.585001 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:46:33 crc kubenswrapper[4796]: I0127 06:46:33.585340 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:33 crc kubenswrapper[4796]: I0127 06:46:33.588125 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:33 crc kubenswrapper[4796]: I0127 06:46:33.588200 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:33 crc kubenswrapper[4796]: I0127 06:46:33.588225 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:33 crc kubenswrapper[4796]: I0127 06:46:33.593093 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:46:33 crc kubenswrapper[4796]: I0127 06:46:33.666790 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 01:57:31.620147788 +0000 UTC Jan 27 06:46:33 crc kubenswrapper[4796]: I0127 06:46:33.786130 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:33 crc kubenswrapper[4796]: I0127 06:46:33.788226 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:33 crc kubenswrapper[4796]: I0127 06:46:33.788271 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:33 crc kubenswrapper[4796]: I0127 06:46:33.788283 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 
06:46:33 crc kubenswrapper[4796]: I0127 06:46:33.788314 4796 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 06:46:33 crc kubenswrapper[4796]: I0127 06:46:33.875910 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:33 crc kubenswrapper[4796]: I0127 06:46:33.875910 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:33 crc kubenswrapper[4796]: I0127 06:46:33.876832 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:33 crc kubenswrapper[4796]: I0127 06:46:33.876860 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:33 crc kubenswrapper[4796]: I0127 06:46:33.876868 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:33 crc kubenswrapper[4796]: I0127 06:46:33.877734 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:33 crc kubenswrapper[4796]: I0127 06:46:33.877755 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:33 crc kubenswrapper[4796]: I0127 06:46:33.877763 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:34 crc kubenswrapper[4796]: I0127 06:46:34.401993 4796 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 06:46:34 crc kubenswrapper[4796]: I0127 06:46:34.402395 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 27 06:46:34 crc kubenswrapper[4796]: I0127 06:46:34.667911 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 08:42:01.291178306 +0000 UTC Jan 27 06:46:35 crc kubenswrapper[4796]: I0127 06:46:35.019467 4796 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 27 06:46:35 crc kubenswrapper[4796]: I0127 06:46:35.669506 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 15:11:29.191487068 +0000 UTC Jan 27 06:46:36 crc kubenswrapper[4796]: I0127 06:46:36.670172 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 16:14:27.114260286 +0000 UTC Jan 27 06:46:37 crc kubenswrapper[4796]: I0127 06:46:37.670996 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 12:31:50.123836833 +0000 UTC Jan 27 06:46:37 crc kubenswrapper[4796]: I0127 06:46:37.809892 4796 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 27 06:46:37 crc kubenswrapper[4796]: I0127 06:46:37.810183 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:37 crc kubenswrapper[4796]: I0127 06:46:37.811822 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:37 crc kubenswrapper[4796]: I0127 06:46:37.811884 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:37 crc kubenswrapper[4796]: I0127 06:46:37.811896 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:37 crc kubenswrapper[4796]: I0127 06:46:37.836377 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:46:37 crc kubenswrapper[4796]: I0127 06:46:37.836673 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:37 crc kubenswrapper[4796]: I0127 06:46:37.838685 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:37 crc kubenswrapper[4796]: I0127 06:46:37.838744 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:37 crc kubenswrapper[4796]: I0127 06:46:37.838759 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:38 crc kubenswrapper[4796]: I0127 06:46:38.671969 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 00:16:48.732280731 +0000 UTC Jan 27 06:46:39 crc kubenswrapper[4796]: I0127 06:46:39.672602 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 13:49:01.145550928 +0000 UTC Jan 27 06:46:40 crc kubenswrapper[4796]: I0127 06:46:40.267253 4796 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 27 06:46:40 crc kubenswrapper[4796]: I0127 06:46:40.267348 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 27 06:46:40 crc kubenswrapper[4796]: I0127 06:46:40.272177 4796 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 27 06:46:40 crc kubenswrapper[4796]: I0127 06:46:40.272283 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 27 06:46:40 crc kubenswrapper[4796]: I0127 06:46:40.672929 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 04:28:30.056982546 +0000 UTC Jan 27 06:46:41 crc kubenswrapper[4796]: E0127 06:46:41.069867 4796 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 27 06:46:41 crc kubenswrapper[4796]: I0127 06:46:41.673793 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 10:33:36.651439089 +0000 UTC Jan 27 06:46:42 crc kubenswrapper[4796]: I0127 06:46:42.374372 4796 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 27 06:46:42 crc kubenswrapper[4796]: I0127 06:46:42.674337 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 03:58:58.199605523 +0000 UTC Jan 27 06:46:43 crc kubenswrapper[4796]: I0127 06:46:43.674970 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 18:45:46.573678316 +0000 UTC Jan 27 06:46:43 crc kubenswrapper[4796]: I0127 06:46:43.868718 4796 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 27 06:46:43 crc kubenswrapper[4796]: I0127 06:46:43.869172 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 27 06:46:44 crc kubenswrapper[4796]: I0127 06:46:44.403654 4796 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 06:46:44 crc kubenswrapper[4796]: I0127 06:46:44.403769 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 06:46:44 crc kubenswrapper[4796]: I0127 06:46:44.548753 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:46:44 crc kubenswrapper[4796]: I0127 06:46:44.549001 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:44 crc kubenswrapper[4796]: I0127 06:46:44.549660 4796 patch_prober.go:28] 
interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 27 06:46:44 crc kubenswrapper[4796]: I0127 06:46:44.549759 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 27 06:46:44 crc kubenswrapper[4796]: I0127 06:46:44.550427 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:44 crc kubenswrapper[4796]: I0127 06:46:44.550470 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:44 crc kubenswrapper[4796]: I0127 06:46:44.550482 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:44 crc kubenswrapper[4796]: I0127 06:46:44.555043 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:46:44 crc kubenswrapper[4796]: I0127 06:46:44.675823 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 11:55:47.862343353 +0000 UTC Jan 27 06:46:44 crc kubenswrapper[4796]: I0127 06:46:44.908934 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:44 crc kubenswrapper[4796]: I0127 06:46:44.909600 4796 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 27 06:46:44 crc kubenswrapper[4796]: I0127 06:46:44.909700 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 27 06:46:44 crc kubenswrapper[4796]: I0127 06:46:44.910381 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:44 crc kubenswrapper[4796]: I0127 06:46:44.910429 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:44 crc kubenswrapper[4796]: I0127 06:46:44.910440 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:45 crc kubenswrapper[4796]: E0127 06:46:45.255982 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="7s" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.258889 4796 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 27 06:46:45 crc 
kubenswrapper[4796]: I0127 06:46:45.258989 4796 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 27 06:46:45 crc kubenswrapper[4796]: E0127 06:46:45.260844 4796 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.267298 4796 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.267683 4796 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.279886 4796 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.279955 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.280057 4796 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.300590 4796 csr.go:261] certificate signing request csr-j9jhf is approved, waiting to be issued Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.312168 4796 csr.go:257] certificate signing request csr-j9jhf is issued Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.623989 4796 apiserver.go:52] "Watching apiserver" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.627269 4796 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.627587 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.627960 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.627967 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.628078 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.628204 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.628241 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.628385 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 06:46:45 crc kubenswrapper[4796]: E0127 06:46:45.628303 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:46:45 crc kubenswrapper[4796]: E0127 06:46:45.628596 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:46:45 crc kubenswrapper[4796]: E0127 06:46:45.628684 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.629989 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.629990 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.630110 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.630111 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.630281 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.630753 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.630763 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.631175 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.631228 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.659134 4796 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.670746 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.670805 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.670839 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.670867 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.670893 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.670916 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.670936 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.670958 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.670997 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671005 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671021 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671083 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671088 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671103 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671121 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671142 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671157 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671171 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671186 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671192 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671202 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671241 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671268 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671293 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671312 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671320 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671359 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671384 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671407 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671430 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671450 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671470 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671491 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671512 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671531 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: 
\"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671558 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671571 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671596 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671624 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671648 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671669 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671690 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671714 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671741 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671768 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671792 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671793 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671818 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671815 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671844 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671867 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671888 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671909 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671930 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671936 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671943 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671957 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671968 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.671986 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.672011 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.672012 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.672034 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.672059 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.672047 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.672082 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.672105 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.672127 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.672142 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.672148 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.672192 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.672217 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.672246 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.672269 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.672292 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.672314 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.672337 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.672365 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.672392 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.672417 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.672437 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.672460 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.672481 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.672500 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 
06:46:45.672525 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.672569 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.672594 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.672603 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.672618 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.672641 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.672665 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.672684 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.672700 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.672702 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: 
"fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.672770 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.672782 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.672854 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.672856 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.672717 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673003 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673024 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673041 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673058 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673074 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673089 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673105 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673122 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673141 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: 
\"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673156 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673172 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673188 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673206 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673222 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673236 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673251 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673268 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673289 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673308 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673329 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673350 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673373 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673395 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673416 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673433 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673448 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673463 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673478 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673494 4796 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673508 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673669 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673700 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673725 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673751 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673774 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673794 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673816 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673841 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 06:46:45 crc 
kubenswrapper[4796]: I0127 06:46:45.673866 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673000 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673166 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673181 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673250 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673315 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673351 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.675341 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673508 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673642 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673719 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673731 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673776 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673835 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673860 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.673987 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.674162 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.674236 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.674593 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.674630 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.674913 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.674921 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.674941 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.675172 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.675267 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.675287 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.675352 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.675520 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.675592 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.675642 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.675666 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.675683 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.675701 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.675724 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.675744 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.675763 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.675781 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.675798 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.675816 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.675836 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.675855 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.675872 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.675890 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.675907 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.675923 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.675943 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.675965 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.675982 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.675999 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676016 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676032 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676050 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676069 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676113 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676150 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676173 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676195 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676218 4796 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676242 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676265 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676287 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676309 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676333 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676372 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676397 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676420 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676453 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676477 4796 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676499 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676524 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676572 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676599 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676654 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676693 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676725 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676755 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676784 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 
06:46:45.676811 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676836 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676861 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676886 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676911 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676940 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676966 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676991 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.677019 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.677046 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 27 06:46:45 crc 
kubenswrapper[4796]: I0127 06:46:45.677073 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.677102 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.677133 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.677161 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.677190 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.677219 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.677252 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.677281 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.677309 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.677337 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.677359 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.677380 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.677403 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.677429 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.677455 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.677482 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.677509 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.677565 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.677664 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.678058 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: 
\"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.678164 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.678209 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.675918 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676083 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.675884 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676495 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676713 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676826 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 23:15:36.784662654 +0000 UTC Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676873 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676818 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.676944 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.677175 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.677194 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.677234 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.677507 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.680375 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.680479 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.680526 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.680593 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.680668 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.680734 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.680789 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.680911 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.680957 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.681021 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.681088 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.681130 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.681141 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.681165 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.681578 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.681611 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.681732 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.681899 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.682070 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.682238 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.682390 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.682420 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.682834 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.682992 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.683028 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.683187 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.683217 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.683291 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.683416 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.683442 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.683572 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.683837 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.683858 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.683997 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.684030 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.684186 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.684193 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). 
InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.684199 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.684306 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.684724 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.684824 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.684862 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.685002 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.685005 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.677849 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). 
InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.677982 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.678145 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.678209 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.678325 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.678448 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.678484 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.678809 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.678986 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.679023 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.679202 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.679156 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.685501 4796 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.679310 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.679311 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.679442 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.679681 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.679772 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.679896 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.685186 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.685363 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.685620 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.686134 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.686284 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.686331 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.686564 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.686684 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.686894 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.687661 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.687734 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.687920 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.687975 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.688039 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.688062 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.688483 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: E0127 06:46:45.688564 4796 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:46:45 crc kubenswrapper[4796]: E0127 06:46:45.688728 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:46:46.188701116 +0000 UTC m=+27.295668463 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.688738 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.688865 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.677257 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.689097 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 06:46:45 crc kubenswrapper[4796]: E0127 06:46:45.689186 4796 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:46:45 crc kubenswrapper[4796]: E0127 06:46:45.689241 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-27 06:46:46.189225849 +0000 UTC m=+27.296193186 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.689343 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.689470 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.689717 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.689854 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.689853 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.690022 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.677523 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). 
InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.690775 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.691212 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: E0127 06:46:45.691357 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:46:46.191341352 +0000 UTC m=+27.298308679 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.691394 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.691708 4796 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.691729 4796 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.691739 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.691748 4796 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.691758 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.691791 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.691801 4796 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.691810 4796 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.691819 4796 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.691828 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.691838 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.691847 4796 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.692075 4796 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.692087 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.692096 4796 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.692105 4796 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.692113 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.693004 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.693018 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.693028 4796 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.693037 4796 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.693059 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.693089 4796 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.693098 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.693108 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.693117 4796 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.693126 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.693137 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.693162 4796 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.693171 4796 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.693182 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.693191 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.693200 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.693210 4796 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.693219 4796 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.693243 4796 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.693253 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.693261 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.693276 4796 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc 
kubenswrapper[4796]: I0127 06:46:45.693285 4796 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.693316 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.693327 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.693337 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.693346 4796 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.693355 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.693365 4796 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.693375 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.693399 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.693408 4796 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.693418 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.693428 4796 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.693438 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 
06:46:45.693447 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.693474 4796 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.693483 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.693493 4796 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.693503 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.693513 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.693522 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.691853 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.692759 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.692918 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.693933 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.694311 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.694435 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.694629 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.694960 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.695332 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.695512 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.696407 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.697180 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.697309 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.697948 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.698061 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.698303 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.698646 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.698739 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.701756 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.704115 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: E0127 06:46:45.704205 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:46:45 crc kubenswrapper[4796]: E0127 06:46:45.704223 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:46:45 crc kubenswrapper[4796]: E0127 06:46:45.704316 4796 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:46:45 crc kubenswrapper[4796]: E0127 06:46:45.704406 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 06:46:46.204383675 +0000 UTC m=+27.311351002 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.704432 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.704818 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.705285 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.705113 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.705255 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.705610 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.706751 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.707178 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.707294 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.707386 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.707714 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.707942 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.708724 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.709404 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: E0127 06:46:45.709588 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:46:45 crc kubenswrapper[4796]: E0127 06:46:45.709615 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.709600 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: E0127 06:46:45.709629 4796 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.709680 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.709592 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.710083 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: E0127 06:46:45.710173 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 06:46:46.210140321 +0000 UTC m=+27.317107788 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.710204 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.710414 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.710527 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.711110 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.711155 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.711805 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.711900 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.712967 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.713232 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.715062 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.715851 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.718613 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.718888 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.718959 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.718984 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.719498 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.719642 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.719832 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.720159 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.720297 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.721302 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.725081 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.732717 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.735387 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.738331 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.741489 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.756480 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.761347 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.762119 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.776915 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.788961 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794311 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794371 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794433 4796 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794444 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794455 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794466 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794477 4796 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794486 4796 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794494 4796 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794504 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794513 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794522 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794552 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794561 4796 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794569 4796 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794578 4796 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794586 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794595 4796 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794605 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794616 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794638 4796 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794648 4796 
reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794658 4796 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794666 4796 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794675 4796 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794684 4796 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794693 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794702 4796 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794710 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794719 4796 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794728 4796 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794736 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794744 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794753 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794762 4796 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794770 4796 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794779 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794787 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794796 4796 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794806 4796 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794814 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794822 4796 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794830 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794839 4796 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794847 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794856 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794864 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794873 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" 
(UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794882 4796 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794891 4796 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794899 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794908 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794916 4796 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794927 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794936 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794945 4796 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794953 4796 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794961 4796 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794968 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794976 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794985 4796 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.794992 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795000 4796 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795008 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795018 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795025 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795033 4796 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795041 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795048 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795056 4796 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795067 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795075 4796 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795086 4796 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795094 4796 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795103 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795112 4796 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795120 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795128 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795138 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795146 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795154 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795162 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795169 4796 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795177 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795185 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795193 4796 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795200 4796 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795209 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795217 4796 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795225 4796 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795232 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795241 4796 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795249 4796 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795257 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795266 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795275 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795283 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795291 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795299 4796 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: 
I0127 06:46:45.795307 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795315 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795323 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795330 4796 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795338 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795347 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795355 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795364 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795372 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795380 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795388 4796 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795397 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795406 4796 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc 
kubenswrapper[4796]: I0127 06:46:45.795413 4796 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795421 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795429 4796 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795437 4796 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795445 4796 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795453 4796 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795461 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795470 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795477 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795486 4796 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795495 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795504 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795511 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 
06:46:45.795519 4796 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795527 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795548 4796 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795556 4796 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795564 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795572 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795582 4796 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795590 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795597 4796 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795605 4796 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795613 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.795620 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.796109 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.796145 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.946391 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.955413 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 27 06:46:45 crc kubenswrapper[4796]: W0127 06:46:45.964039 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-1be4bbaad66d9d84d970d4bafa106aea46b8fb3269ebbbf192cf51a10ea56027 WatchSource:0}: Error finding container 1be4bbaad66d9d84d970d4bafa106aea46b8fb3269ebbbf192cf51a10ea56027: Status 404 returned error can't find the container with id 1be4bbaad66d9d84d970d4bafa106aea46b8fb3269ebbbf192cf51a10ea56027 Jan 27 06:46:45 crc kubenswrapper[4796]: I0127 06:46:45.965888 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 27 06:46:45 crc kubenswrapper[4796]: W0127 06:46:45.997092 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-494cb1d1fee78ccb704359a28e7e09bc549b43ce2ecd586b64773307aabb9551 WatchSource:0}: Error finding container 494cb1d1fee78ccb704359a28e7e09bc549b43ce2ecd586b64773307aabb9551: Status 404 returned error can't find the container with id 494cb1d1fee78ccb704359a28e7e09bc549b43ce2ecd586b64773307aabb9551 Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.198717 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.198821 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.198855 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:46:46 crc kubenswrapper[4796]: E0127 06:46:46.198935 4796 configmap.go:193] Couldn't get configMap 
openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:46:46 crc kubenswrapper[4796]: E0127 06:46:46.198937 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:46:47.198905361 +0000 UTC m=+28.305872688 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:46:46 crc kubenswrapper[4796]: E0127 06:46:46.199051 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:46:47.199043284 +0000 UTC m=+28.306010611 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:46:46 crc kubenswrapper[4796]: E0127 06:46:46.199076 4796 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:46:46 crc kubenswrapper[4796]: E0127 06:46:46.199227 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:46:47.199186848 +0000 UTC m=+28.306154355 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.300154 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.300211 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:46:46 crc kubenswrapper[4796]: E0127 06:46:46.300319 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:46:46 crc kubenswrapper[4796]: E0127 06:46:46.300333 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:46:46 crc kubenswrapper[4796]: E0127 06:46:46.300343 4796 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:46:46 crc kubenswrapper[4796]: E0127 06:46:46.300388 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 06:46:47.300375165 +0000 UTC m=+28.407342492 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:46:46 crc kubenswrapper[4796]: E0127 06:46:46.300434 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:46:46 crc kubenswrapper[4796]: E0127 06:46:46.300442 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:46:46 crc kubenswrapper[4796]: E0127 06:46:46.300449 4796 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:46:46 crc kubenswrapper[4796]: E0127 06:46:46.300467 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 06:46:47.300461337 +0000 UTC m=+28.407428664 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.313000 4796 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-27 06:41:45 +0000 UTC, rotation deadline is 2026-11-07 12:07:27.042794955 +0000 UTC Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.313067 4796 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6821h20m40.729729617s for next certificate rotation Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.680569 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 09:32:40.66646361 +0000 UTC Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.746239 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:46:46 crc kubenswrapper[4796]: E0127 06:46:46.746376 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.750611 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.751256 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.752618 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.753282 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.754689 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.755294 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.756367 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.757651 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.758447 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.759690 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.760310 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.761664 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.762167 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.762770 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.764028 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.764713 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.765874 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.766385 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.767160 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.768365 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.769151 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.770342 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.770879 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.772121 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.772639 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.773336 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.774680 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.775223 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" 
path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.776630 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.777135 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.777658 4796 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.777765 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.779139 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.779689 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.780179 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.781434 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.782097 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.782641 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.783250 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.783951 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.784403 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.785006 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" 
path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.788334 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.789009 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.789895 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.790407 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.791250 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.791956 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.792790 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.793231 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.794247 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.794783 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.795334 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.796135 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.915425 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.916000 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.917766 4796 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c" exitCode=255 Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.917817 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c"} Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.917912 4796 scope.go:117] "RemoveContainer" containerID="18e4e34dfeb515f221d9d4f666df865797a9ffc2d5bd6a4c1ce75c52e07be5a8" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.918814 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"494cb1d1fee78ccb704359a28e7e09bc549b43ce2ecd586b64773307aabb9551"} Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.920955 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36"} Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.920984 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab"} Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.921000 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c5d6501d01a06dbd0c76bea12d7a1130ca717b3625eb176a5f68104f52003c7c"} Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.925694 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd"} Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.925820 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"1be4bbaad66d9d84d970d4bafa106aea46b8fb3269ebbbf192cf51a10ea56027"} Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.934430 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:46Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.964061 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:46Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.984166 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:46Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:46 crc kubenswrapper[4796]: I0127 06:46:46.995520 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:46Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.005963 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.019463 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.024661 4796 scope.go:117] "RemoveContainer" containerID="46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c" Jan 27 06:46:47 crc kubenswrapper[4796]: E0127 06:46:47.024918 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.030336 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.030577 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.041902 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.055392 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.071633 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.084090 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.098313 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.208262 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.208335 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.208377 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:46:47 crc kubenswrapper[4796]: E0127 06:46:47.208454 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:46:49.208427225 +0000 UTC m=+30.315394552 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:46:47 crc kubenswrapper[4796]: E0127 06:46:47.208498 4796 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:46:47 crc kubenswrapper[4796]: E0127 06:46:47.208514 4796 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:46:47 crc kubenswrapper[4796]: E0127 06:46:47.208574 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:46:49.208559468 +0000 UTC m=+30.315526795 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:46:47 crc kubenswrapper[4796]: E0127 06:46:47.208602 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:46:49.208587429 +0000 UTC m=+30.315554956 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.308960 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.309030 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:46:47 crc kubenswrapper[4796]: E0127 06:46:47.309169 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:46:47 crc kubenswrapper[4796]: E0127 06:46:47.309181 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:46:47 crc kubenswrapper[4796]: E0127 06:46:47.309225 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:46:47 crc kubenswrapper[4796]: E0127 06:46:47.309238 4796 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:46:47 crc kubenswrapper[4796]: E0127 06:46:47.309293 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 06:46:49.309276144 +0000 UTC m=+30.416243471 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:46:47 crc kubenswrapper[4796]: E0127 06:46:47.309189 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:46:47 crc kubenswrapper[4796]: E0127 06:46:47.309332 4796 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:46:47 crc kubenswrapper[4796]: E0127 06:46:47.309425 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 06:46:49.309407777 +0000 UTC m=+30.416375264 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.629321 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-qfqgm"] Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.629782 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-9j4qm"] Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.630179 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-zhtz2"] Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.630403 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-zhtz2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.630790 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.631146 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.632959 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-46ql2"] Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.633136 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.633169 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.633758 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.633761 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.634614 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.634851 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.634864 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.635255 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.635318 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.635366 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.635605 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.636160 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.636371 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.636685 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.636824 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.637281 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.667277 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://18e4e34dfeb515f221d9d4f666df865797a9ffc2d5bd6a4c1ce75c52e07be5a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:29Z\\\",\\\"message\\\":\\\"W0127 06:46:29.402267 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 06:46:29.402596 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769496389 cert, and key in /tmp/serving-cert-259868119/serving-signer.crt, /tmp/serving-cert-259868119/serving-signer.key\\\\nI0127 06:46:29.641037 1 observer_polling.go:159] Starting file observer\\\\nW0127 06:46:29.658768 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 06:46:29.658920 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:29.659738 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-259868119/tls.crt::/tmp/serving-cert-259868119/tls.key\\\\\\\"\\\\nF0127 06:46:29.905211 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:46:45.928911 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.680630 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.680788 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 00:15:56.866214487 +0000 UTC Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.692081 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.703318 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.713279 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/84d7512b-555d-440a-b817-deb8ba12f61d-proxy-tls\") pod \"machine-config-daemon-qfqgm\" (UID: \"84d7512b-555d-440a-b817-deb8ba12f61d\") " pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.713307 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-hostroot\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.713369 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-host-var-lib-kubelet\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 
06:46:47.713445 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b3555bc2-e335-4479-8b6f-8b5970b27a25-multus-daemon-config\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.713483 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-multus-conf-dir\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.713504 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-host-run-multus-certs\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.713551 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-cnibin\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.713590 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-host-run-k8s-cni-cncf-io\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.713725 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7djgx\" (UniqueName: \"kubernetes.io/projected/84d7512b-555d-440a-b817-deb8ba12f61d-kube-api-access-7djgx\") pod \"machine-config-daemon-qfqgm\" (UID: \"84d7512b-555d-440a-b817-deb8ba12f61d\") " pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.713837 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b3555bc2-e335-4479-8b6f-8b5970b27a25-cni-binary-copy\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.713873 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-host-var-lib-cni-multus\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.713905 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/523d7c54-e525-4fef-8de8-b3bff6b70d8e-os-release\") pod \"multus-additional-cni-plugins-9j4qm\" (UID: \"523d7c54-e525-4fef-8de8-b3bff6b70d8e\") " pod="openshift-multus/multus-additional-cni-plugins-9j4qm" Jan 
27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.713931 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/523d7c54-e525-4fef-8de8-b3bff6b70d8e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9j4qm\" (UID: \"523d7c54-e525-4fef-8de8-b3bff6b70d8e\") " pod="openshift-multus/multus-additional-cni-plugins-9j4qm" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.713966 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/523d7c54-e525-4fef-8de8-b3bff6b70d8e-cni-binary-copy\") pod \"multus-additional-cni-plugins-9j4qm\" (UID: \"523d7c54-e525-4fef-8de8-b3bff6b70d8e\") " pod="openshift-multus/multus-additional-cni-plugins-9j4qm" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.714014 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6czc\" (UniqueName: \"kubernetes.io/projected/523d7c54-e525-4fef-8de8-b3bff6b70d8e-kube-api-access-g6czc\") pod \"multus-additional-cni-plugins-9j4qm\" (UID: \"523d7c54-e525-4fef-8de8-b3bff6b70d8e\") " pod="openshift-multus/multus-additional-cni-plugins-9j4qm" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.714099 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-multus-cni-dir\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.714182 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-host-var-lib-cni-bin\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.714216 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/523d7c54-e525-4fef-8de8-b3bff6b70d8e-system-cni-dir\") pod \"multus-additional-cni-plugins-9j4qm\" (UID: \"523d7c54-e525-4fef-8de8-b3bff6b70d8e\") " pod="openshift-multus/multus-additional-cni-plugins-9j4qm" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.714244 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/523d7c54-e525-4fef-8de8-b3bff6b70d8e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9j4qm\" (UID: \"523d7c54-e525-4fef-8de8-b3bff6b70d8e\") " pod="openshift-multus/multus-additional-cni-plugins-9j4qm" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.714287 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-os-release\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.714321 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7ckp\" (UniqueName: 
\"kubernetes.io/projected/b3555bc2-e335-4479-8b6f-8b5970b27a25-kube-api-access-h7ckp\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.714347 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/61a29cf2-64f3-4655-a2fa-06b269c644ee-hosts-file\") pod \"node-resolver-zhtz2\" (UID: \"61a29cf2-64f3-4655-a2fa-06b269c644ee\") " pod="openshift-dns/node-resolver-zhtz2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.714392 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/84d7512b-555d-440a-b817-deb8ba12f61d-mcd-auth-proxy-config\") pod \"machine-config-daemon-qfqgm\" (UID: \"84d7512b-555d-440a-b817-deb8ba12f61d\") " pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.714423 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-host-run-netns\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.714478 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdhzj\" (UniqueName: \"kubernetes.io/projected/61a29cf2-64f3-4655-a2fa-06b269c644ee-kube-api-access-kdhzj\") pod \"node-resolver-zhtz2\" (UID: \"61a29cf2-64f3-4655-a2fa-06b269c644ee\") " pod="openshift-dns/node-resolver-zhtz2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.714496 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/84d7512b-555d-440a-b817-deb8ba12f61d-rootfs\") pod \"machine-config-daemon-qfqgm\" (UID: \"84d7512b-555d-440a-b817-deb8ba12f61d\") " pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.714516 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-system-cni-dir\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.714549 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-multus-socket-dir-parent\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.714601 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-etc-kubernetes\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.714623 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/523d7c54-e525-4fef-8de8-b3bff6b70d8e-cnibin\") pod \"multus-additional-cni-plugins-9j4qm\" (UID: \"523d7c54-e525-4fef-8de8-b3bff6b70d8e\") " pod="openshift-multus/multus-additional-cni-plugins-9j4qm" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.716266 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.727167 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.737493 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.746354 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.746355 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:46:47 crc kubenswrapper[4796]: E0127 06:46:47.746498 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:46:47 crc kubenswrapper[4796]: E0127 06:46:47.746614 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.748653 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.761181 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.777014 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.790206 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.808292 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.815373 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-cnibin\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.815427 4796 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-host-run-k8s-cni-cncf-io\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.815453 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/523d7c54-e525-4fef-8de8-b3bff6b70d8e-os-release\") pod \"multus-additional-cni-plugins-9j4qm\" (UID: \"523d7c54-e525-4fef-8de8-b3bff6b70d8e\") " pod="openshift-multus/multus-additional-cni-plugins-9j4qm" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.815480 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/523d7c54-e525-4fef-8de8-b3bff6b70d8e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9j4qm\" (UID: \"523d7c54-e525-4fef-8de8-b3bff6b70d8e\") " pod="openshift-multus/multus-additional-cni-plugins-9j4qm" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.815511 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7djgx\" (UniqueName: \"kubernetes.io/projected/84d7512b-555d-440a-b817-deb8ba12f61d-kube-api-access-7djgx\") pod \"machine-config-daemon-qfqgm\" (UID: \"84d7512b-555d-440a-b817-deb8ba12f61d\") " pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.815552 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b3555bc2-e335-4479-8b6f-8b5970b27a25-cni-binary-copy\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.815580 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-host-var-lib-cni-multus\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.815528 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-cnibin\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.815607 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6czc\" (UniqueName: \"kubernetes.io/projected/523d7c54-e525-4fef-8de8-b3bff6b70d8e-kube-api-access-g6czc\") pod \"multus-additional-cni-plugins-9j4qm\" (UID: \"523d7c54-e525-4fef-8de8-b3bff6b70d8e\") " pod="openshift-multus/multus-additional-cni-plugins-9j4qm" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.815711 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/523d7c54-e525-4fef-8de8-b3bff6b70d8e-cni-binary-copy\") pod \"multus-additional-cni-plugins-9j4qm\" (UID: \"523d7c54-e525-4fef-8de8-b3bff6b70d8e\") " pod="openshift-multus/multus-additional-cni-plugins-9j4qm" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.815764 4796 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-multus-cni-dir\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.815807 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-host-var-lib-cni-bin\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.815889 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-host-run-k8s-cni-cncf-io\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.815887 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-host-var-lib-cni-multus\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.815932 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-host-var-lib-cni-bin\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.817050 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-multus-cni-dir\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.817880 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/523d7c54-e525-4fef-8de8-b3bff6b70d8e-cni-binary-copy\") pod \"multus-additional-cni-plugins-9j4qm\" (UID: \"523d7c54-e525-4fef-8de8-b3bff6b70d8e\") " pod="openshift-multus/multus-additional-cni-plugins-9j4qm" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.818231 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/523d7c54-e525-4fef-8de8-b3bff6b70d8e-os-release\") pod \"multus-additional-cni-plugins-9j4qm\" (UID: \"523d7c54-e525-4fef-8de8-b3bff6b70d8e\") " pod="openshift-multus/multus-additional-cni-plugins-9j4qm" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.815847 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/523d7c54-e525-4fef-8de8-b3bff6b70d8e-system-cni-dir\") pod \"multus-additional-cni-plugins-9j4qm\" (UID: \"523d7c54-e525-4fef-8de8-b3bff6b70d8e\") " pod="openshift-multus/multus-additional-cni-plugins-9j4qm" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.818340 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/523d7c54-e525-4fef-8de8-b3bff6b70d8e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9j4qm\" (UID: \"523d7c54-e525-4fef-8de8-b3bff6b70d8e\") " pod="openshift-multus/multus-additional-cni-plugins-9j4qm" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.818493 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-os-release\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.818572 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7ckp\" (UniqueName: \"kubernetes.io/projected/b3555bc2-e335-4479-8b6f-8b5970b27a25-kube-api-access-h7ckp\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.818616 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/61a29cf2-64f3-4655-a2fa-06b269c644ee-hosts-file\") pod \"node-resolver-zhtz2\" (UID: \"61a29cf2-64f3-4655-a2fa-06b269c644ee\") " pod="openshift-dns/node-resolver-zhtz2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.818647 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-os-release\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.818606 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/523d7c54-e525-4fef-8de8-b3bff6b70d8e-system-cni-dir\") pod \"multus-additional-cni-plugins-9j4qm\" (UID: \"523d7c54-e525-4fef-8de8-b3bff6b70d8e\") " pod="openshift-multus/multus-additional-cni-plugins-9j4qm" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.818659 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/84d7512b-555d-440a-b817-deb8ba12f61d-mcd-auth-proxy-config\") pod \"machine-config-daemon-qfqgm\" (UID: \"84d7512b-555d-440a-b817-deb8ba12f61d\") " pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.818741 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/61a29cf2-64f3-4655-a2fa-06b269c644ee-hosts-file\") pod \"node-resolver-zhtz2\" (UID: \"61a29cf2-64f3-4655-a2fa-06b269c644ee\") " pod="openshift-dns/node-resolver-zhtz2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.818755 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-host-run-netns\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.818805 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-host-run-netns\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " 
pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.818811 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdhzj\" (UniqueName: \"kubernetes.io/projected/61a29cf2-64f3-4655-a2fa-06b269c644ee-kube-api-access-kdhzj\") pod \"node-resolver-zhtz2\" (UID: \"61a29cf2-64f3-4655-a2fa-06b269c644ee\") " pod="openshift-dns/node-resolver-zhtz2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.818884 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/523d7c54-e525-4fef-8de8-b3bff6b70d8e-cnibin\") pod \"multus-additional-cni-plugins-9j4qm\" (UID: \"523d7c54-e525-4fef-8de8-b3bff6b70d8e\") " pod="openshift-multus/multus-additional-cni-plugins-9j4qm" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.818944 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/84d7512b-555d-440a-b817-deb8ba12f61d-rootfs\") pod \"machine-config-daemon-qfqgm\" (UID: \"84d7512b-555d-440a-b817-deb8ba12f61d\") " pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.818978 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-system-cni-dir\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.819013 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-multus-socket-dir-parent\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.819042 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-etc-kubernetes\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.819060 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/84d7512b-555d-440a-b817-deb8ba12f61d-rootfs\") pod \"machine-config-daemon-qfqgm\" (UID: \"84d7512b-555d-440a-b817-deb8ba12f61d\") " pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.819101 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-etc-kubernetes\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.819138 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/523d7c54-e525-4fef-8de8-b3bff6b70d8e-cnibin\") pod \"multus-additional-cni-plugins-9j4qm\" (UID: \"523d7c54-e525-4fef-8de8-b3bff6b70d8e\") " pod="openshift-multus/multus-additional-cni-plugins-9j4qm" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.819161 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-system-cni-dir\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.819153 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/84d7512b-555d-440a-b817-deb8ba12f61d-proxy-tls\") pod \"machine-config-daemon-qfqgm\" (UID: \"84d7512b-555d-440a-b817-deb8ba12f61d\") " pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.819216 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-hostroot\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.819248 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b3555bc2-e335-4479-8b6f-8b5970b27a25-multus-daemon-config\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.819260 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-hostroot\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.819214 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-multus-socket-dir-parent\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.819297 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-host-var-lib-kubelet\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.819332 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-multus-conf-dir\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.819339 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/84d7512b-555d-440a-b817-deb8ba12f61d-mcd-auth-proxy-config\") pod \"machine-config-daemon-qfqgm\" (UID: \"84d7512b-555d-440a-b817-deb8ba12f61d\") " pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.819363 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-host-run-multus-certs\") pod 
\"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.819444 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-host-var-lib-kubelet\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.820161 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-host-run-multus-certs\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.820161 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/523d7c54-e525-4fef-8de8-b3bff6b70d8e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9j4qm\" (UID: \"523d7c54-e525-4fef-8de8-b3bff6b70d8e\") " pod="openshift-multus/multus-additional-cni-plugins-9j4qm" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.820195 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b3555bc2-e335-4479-8b6f-8b5970b27a25-multus-conf-dir\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.820771 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b3555bc2-e335-4479-8b6f-8b5970b27a25-multus-daemon-config\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.821019 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/523d7c54-e525-4fef-8de8-b3bff6b70d8e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9j4qm\" (UID: \"523d7c54-e525-4fef-8de8-b3bff6b70d8e\") " pod="openshift-multus/multus-additional-cni-plugins-9j4qm" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.821414 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b3555bc2-e335-4479-8b6f-8b5970b27a25-cni-binary-copy\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.826690 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/84d7512b-555d-440a-b817-deb8ba12f61d-proxy-tls\") pod \"machine-config-daemon-qfqgm\" (UID: \"84d7512b-555d-440a-b817-deb8ba12f61d\") " pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.833484 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46ql2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3555bc2-e335-4479-8b6f-8b5970b27a25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7ckp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46ql2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.839167 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdhzj\" (UniqueName: \"kubernetes.io/projected/61a29cf2-64f3-4655-a2fa-06b269c644ee-kube-api-access-kdhzj\") pod \"node-resolver-zhtz2\" (UID: \"61a29cf2-64f3-4655-a2fa-06b269c644ee\") " pod="openshift-dns/node-resolver-zhtz2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.840178 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6czc\" (UniqueName: \"kubernetes.io/projected/523d7c54-e525-4fef-8de8-b3bff6b70d8e-kube-api-access-g6czc\") pod \"multus-additional-cni-plugins-9j4qm\" (UID: \"523d7c54-e525-4fef-8de8-b3bff6b70d8e\") " pod="openshift-multus/multus-additional-cni-plugins-9j4qm" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.843387 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7djgx\" (UniqueName: \"kubernetes.io/projected/84d7512b-555d-440a-b817-deb8ba12f61d-kube-api-access-7djgx\") pod \"machine-config-daemon-qfqgm\" (UID: \"84d7512b-555d-440a-b817-deb8ba12f61d\") " pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.850056 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.850118 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7ckp\" (UniqueName: \"kubernetes.io/projected/b3555bc2-e335-4479-8b6f-8b5970b27a25-kube-api-access-h7ckp\") pod \"multus-46ql2\" (UID: \"b3555bc2-e335-4479-8b6f-8b5970b27a25\") " pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.852434 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18e4e34dfeb515f221d9d4f666df865797a9ffc2d5bd6a4c1ce75c52e07be5a8\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:29Z\\\",\\\"message\\\":\\\"W0127 06:46:29.402267 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0127 
06:46:29.402596 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769496389 cert, and key in /tmp/serving-cert-259868119/serving-signer.crt, /tmp/serving-cert-259868119/serving-signer.key\\\\nI0127 06:46:29.641037 1 observer_polling.go:159] Starting file observer\\\\nW0127 06:46:29.658768 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0127 06:46:29.658920 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:29.659738 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-259868119/tls.crt::/tmp/serving-cert-259868119/tls.key\\\\\\\"\\\\nF0127 06:46:29.905211 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:46:45.928911 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.866517 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.866705 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.881338 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.896890 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.911328 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.924216 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:47Z is 
after 2025-08-24T17:21:41Z" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.930551 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.932424 4796 scope.go:117] "RemoveContainer" containerID="46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c" Jan 27 06:46:47 crc kubenswrapper[4796]: E0127 06:46:47.932642 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.937659 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.945084 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-zhtz2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.952657 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.953344 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.958639 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.963161 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-46ql2" Jan 27 06:46:47 crc kubenswrapper[4796]: I0127 06:46:47.968396 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:47Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:47 crc kubenswrapper[4796]: W0127 06:46:47.986688 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84d7512b_555d_440a_b817_deb8ba12f61d.slice/crio-b1d1114118200ece302f7dbc2f674fbf4ea9ed3b6ae31346a398189744c5e385 WatchSource:0}: Error finding container b1d1114118200ece302f7dbc2f674fbf4ea9ed3b6ae31346a398189744c5e385: Status 404 returned error can't find the container with id b1d1114118200ece302f7dbc2f674fbf4ea9ed3b6ae31346a398189744c5e385 Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.017321 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.025856 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xqmc4"] Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.026828 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.033157 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.033457 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.033722 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.033820 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.033939 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.034036 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.034143 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.071413 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91a38b2a-1669-414b-b2de-bd9f7d656db4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e496a7cea3a6ba7b8e4e60bd02f9cafc56cc4cf1049524b15749352b71b1551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40ca26c279d35c5369a97a8926c8c76b5773da64438b08a0fd721604f5cf1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51f23c6ba8596716e9d0b3001ed8d1494f186950ddb86e3187e98b5a9c5940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5e7ccdedeb2a9cc67c1eb6014f0a327f009ba
ace0decc678fd319c7f02e63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8be41d70648c6527439ed95843ec087f65506642438fab67094697952d0c69ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.104355 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.115989 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.121781 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-etc-openvswitch\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.121823 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a1fb58d6-d9a4-4095-be46-a544216963f7-ovn-node-metrics-cert\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.121869 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.121893 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-systemd-units\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.121912 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a1fb58d6-d9a4-4095-be46-a544216963f7-ovnkube-config\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.121931 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fskkf\" (UniqueName: \"kubernetes.io/projected/a1fb58d6-d9a4-4095-be46-a544216963f7-kube-api-access-fskkf\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.121954 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-node-log\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.121975 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-run-systemd\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.122003 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-cni-bin\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.122023 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-log-socket\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.122044 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-run-netns\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.122064 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-var-lib-openvswitch\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.122091 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-run-ovn\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.122106 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-run-ovn-kubernetes\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.122124 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a1fb58d6-d9a4-4095-be46-a544216963f7-ovnkube-script-lib\") pod 
\"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.122139 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-slash\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.122156 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-run-openvswitch\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.122176 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-cni-netd\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.122192 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-kubelet\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.122207 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a1fb58d6-d9a4-4095-be46-a544216963f7-env-overrides\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.132151 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not 
be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.152039 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-27T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.163776 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:48 
crc kubenswrapper[4796]: I0127 06:46:48.185365 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waitin
g\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"r
eady\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.207467 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.223477 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-cni-netd\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.223514 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-kubelet\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.223552 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a1fb58d6-d9a4-4095-be46-a544216963f7-env-overrides\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.223582 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-etc-openvswitch\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.223603 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a1fb58d6-d9a4-4095-be46-a544216963f7-ovn-node-metrics-cert\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.223634 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.223659 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-systemd-units\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.223681 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-node-log\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.223686 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-cni-netd\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.223719 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-node-log\") pod 
\"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.223741 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.223754 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-systemd-units\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.223744 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-etc-openvswitch\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.223786 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a1fb58d6-d9a4-4095-be46-a544216963f7-ovnkube-config\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.223805 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-kubelet\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.223828 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fskkf\" (UniqueName: \"kubernetes.io/projected/a1fb58d6-d9a4-4095-be46-a544216963f7-kube-api-access-fskkf\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.224412 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a1fb58d6-d9a4-4095-be46-a544216963f7-env-overrides\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.224546 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a1fb58d6-d9a4-4095-be46-a544216963f7-ovnkube-config\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.224606 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-run-systemd\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.224648 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-log-socket\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.224671 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-cni-bin\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.224693 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-var-lib-openvswitch\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.224727 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-run-netns\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.224746 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a1fb58d6-d9a4-4095-be46-a544216963f7-ovnkube-script-lib\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.224769 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-run-ovn\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.224789 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-run-ovn-kubernetes\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.224813 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-slash\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.224834 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-run-openvswitch\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: 
I0127 06:46:48.224848 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-var-lib-openvswitch\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.224884 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-run-systemd\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.224889 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-run-openvswitch\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.224914 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-log-socket\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.224924 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-run-netns\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.224965 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-run-ovn\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.225000 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-run-ovn-kubernetes\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.225031 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-slash\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.224939 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-cni-bin\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.225428 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/a1fb58d6-d9a4-4095-be46-a544216963f7-ovnkube-script-lib\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.226885 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a1fb58d6-d9a4-4095-be46-a544216963f7-ovn-node-metrics-cert\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.227697 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.245168 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fskkf\" (UniqueName: \"kubernetes.io/projected/a1fb58d6-d9a4-4095-be46-a544216963f7-kube-api-access-fskkf\") pod \"ovnkube-node-xqmc4\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.255731 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46ql2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3555bc2-e335-4479-8b6f-8b5970b27a25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7ckp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46ql2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.276581 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:46:45.928911 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.291203 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.303275 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.317992 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.330083 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.345000 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.357084 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.359042 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:48 crc kubenswrapper[4796]: W0127 06:46:48.370646 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1fb58d6_d9a4_4095_be46_a544216963f7.slice/crio-860bd593a4ca13f292a4b7fe7cf8790d6b688898c3d267ad92164a7c15fff046 WatchSource:0}: Error finding container 860bd593a4ca13f292a4b7fe7cf8790d6b688898c3d267ad92164a7c15fff046: Status 404 returned error can't find 
the container with id 860bd593a4ca13f292a4b7fe7cf8790d6b688898c3d267ad92164a7c15fff046 Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.378306 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1fb58d6-d9a4-4095-be46-a544216963f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/
dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xqmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.395424 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:46:45.928911 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.417392 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.431235 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.445603 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46ql2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3555bc2-e335-4479-8b6f-8b5970b27a25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7ckp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46ql2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.465657 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91a38b2a-1669-414b-b2de-bd9f7d656db4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e496a7cea3a6ba7b8e4e60bd02f9cafc56cc4cf1049524b15749352b71b1551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40ca26c279d35c5369a97a8926c8c76b5773da64438b08a0fd721604f5cf1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51f23c6ba8596716e9d0b3001ed8d1494f186950ddb86e3187e98b5a9c5940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5e7ccdedeb2a9cc67c1eb6014f0a327f009baace0decc678fd319c7f02e63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8be41d70648c6527439ed95843ec087f65506642438fab67094697952d0c69ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.477732 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.681444 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 17:41:38.588312897 +0000 UTC Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.746952 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:46:48 crc kubenswrapper[4796]: E0127 06:46:48.747109 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.935803 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-zhtz2" event={"ID":"61a29cf2-64f3-4655-a2fa-06b269c644ee","Type":"ContainerStarted","Data":"372dffb4f29750115dda48ed80597b347e7021ac6dcb2c4161ee2f290f27024f"} Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.935884 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-zhtz2" event={"ID":"61a29cf2-64f3-4655-a2fa-06b269c644ee","Type":"ContainerStarted","Data":"b97451f0445d5ad75739bdd97f53e591230342a3719f8532eeaf6c1ac0de233c"} Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.937136 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"75c8f9bf62140eb4e54fc54bfef65710f511027fb29cc70e83d1f2516bb5b222"} Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.938414 4796 generic.go:334] "Generic (PLEG): container finished" podID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerID="b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d" exitCode=0 Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.938480 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" event={"ID":"a1fb58d6-d9a4-4095-be46-a544216963f7","Type":"ContainerDied","Data":"b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d"} Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.938498 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" event={"ID":"a1fb58d6-d9a4-4095-be46-a544216963f7","Type":"ContainerStarted","Data":"860bd593a4ca13f292a4b7fe7cf8790d6b688898c3d267ad92164a7c15fff046"} Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.939575 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-46ql2" event={"ID":"b3555bc2-e335-4479-8b6f-8b5970b27a25","Type":"ContainerStarted","Data":"ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d"} Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.939636 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-46ql2" event={"ID":"b3555bc2-e335-4479-8b6f-8b5970b27a25","Type":"ContainerStarted","Data":"6048a512eee7d771140fcc853ec77ed2bf4aeab0afa5ac972f139d063d820edb"} Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.941126 4796 generic.go:334] "Generic (PLEG): container finished" podID="523d7c54-e525-4fef-8de8-b3bff6b70d8e" containerID="50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb" exitCode=0 Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.941214 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" event={"ID":"523d7c54-e525-4fef-8de8-b3bff6b70d8e","Type":"ContainerDied","Data":"50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb"} Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.941247 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" event={"ID":"523d7c54-e525-4fef-8de8-b3bff6b70d8e","Type":"ContainerStarted","Data":"f74b0e5baf88ebdd0a6041293ef0bace29afa10b7a8d936619fedeb04e03c09d"} Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.942879 4796 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" event={"ID":"84d7512b-555d-440a-b817-deb8ba12f61d","Type":"ContainerStarted","Data":"ec0d72cc9d4158b10455650c8b8f916e9c314616c0cf92b4e1f2b966fa650e2c"} Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.942920 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" event={"ID":"84d7512b-555d-440a-b817-deb8ba12f61d","Type":"ContainerStarted","Data":"e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2"} Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.942939 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" event={"ID":"84d7512b-555d-440a-b817-deb8ba12f61d","Type":"ContainerStarted","Data":"b1d1114118200ece302f7dbc2f674fbf4ea9ed3b6ae31346a398189744c5e385"} Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.952712 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.966890 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.980788 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:48 crc kubenswrapper[4796]: I0127 06:46:48.992603 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://372dffb4f29750115dda48ed80597b347e7021ac6dcb2c4161ee2f290f27024f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T06:46:48Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.006716 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:49 crc 
kubenswrapper[4796]: I0127 06:46:49.023085 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\
":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready
\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.036310 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cer
t\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.051762 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46ql2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3555bc2-e335-4479-8b6f-8b5970b27a25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7ckp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46ql2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.071655 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1fb58d6-d9a4-4095-be46-a544216963f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xqmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.087049 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:46:45.928911 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.102220 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.122528 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91a38b2a-1669-414b-b2de-bd9f7d656db4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e496a7cea3a6ba7b8e4e60bd02f9cafc56cc4cf1049524b15749352b71b1551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40ca26c279d35c5369a97a8926c8c76b5773da64438b08a0fd721604f5cf1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51f23c6ba8596716e9d0b3001ed8d1494f186950ddb86e3187e98b5a9c5940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5e7ccdedeb2a9cc67c1eb6014f0a327f009ba
ace0decc678fd319c7f02e63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8be41d70648c6527439ed95843ec087f65506642438fab67094697952d0c69ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.135873 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.146199 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-kx5rc"] Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.146595 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-kx5rc" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.148510 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.148570 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.148513 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.149570 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.152207 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c8f9bf62140eb4e54fc54bfef65710f511027fb29cc70e83d1f2516bb5b222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.175101 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91a38b2a-1669-414b-b2de-bd9f7d656db4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e496a7cea3a6ba7b8e4e60bd02f9cafc56cc4cf1049524b15749352b71b1551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40ca26c279d35c5369a97a8926c8c76b5773da64438b08a0fd721604f5cf1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51f23c6ba8596716e9d0b3001ed8d1494f186950ddb86e3187e98b5a9c5940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5e7ccdedeb2a9cc67c1eb6014f0a327f009ba
ace0decc678fd319c7f02e63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8be41d70648c6527439ed95843ec087f65506642438fab67094697952d0c69ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.188337 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.227090 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.234513 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:46:49 crc kubenswrapper[4796]: E0127 06:46:49.234717 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:46:53.234684096 +0000 UTC m=+34.341651433 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.234781 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.234837 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c6932c20-41d2-487b-90b4-1e3c96cb17fe-host\") pod \"node-ca-kx5rc\" (UID: \"c6932c20-41d2-487b-90b4-1e3c96cb17fe\") " pod="openshift-image-registry/node-ca-kx5rc" Jan 27 06:46:49 crc kubenswrapper[4796]: E0127 06:46:49.234857 4796 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.234865 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnlxh\" (UniqueName: \"kubernetes.io/projected/c6932c20-41d2-487b-90b4-1e3c96cb17fe-kube-api-access-cnlxh\") pod \"node-ca-kx5rc\" (UID: \"c6932c20-41d2-487b-90b4-1e3c96cb17fe\") " pod="openshift-image-registry/node-ca-kx5rc" Jan 27 06:46:49 crc kubenswrapper[4796]: E0127 06:46:49.234920 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:46:53.234902952 +0000 UTC m=+34.341870279 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.235033 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.235129 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c6932c20-41d2-487b-90b4-1e3c96cb17fe-serviceca\") pod \"node-ca-kx5rc\" (UID: \"c6932c20-41d2-487b-90b4-1e3c96cb17fe\") " pod="openshift-image-registry/node-ca-kx5rc" Jan 27 06:46:49 crc kubenswrapper[4796]: E0127 06:46:49.235197 4796 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:46:49 crc kubenswrapper[4796]: E0127 06:46:49.235264 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:46:53.235246271 +0000 UTC m=+34.342213608 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.274285 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.287137 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://372dffb4f29750115dda48ed80597b347e7021ac6dcb2c4161ee2f290f27024f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.297000 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec0d72cc9d4158b10455650c8b8f916e9c314616c0cf92b4e1f2b966fa650e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.311781 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\
\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.324591 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.336190 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c6932c20-41d2-487b-90b4-1e3c96cb17fe-serviceca\") pod \"node-ca-kx5rc\" (UID: \"c6932c20-41d2-487b-90b4-1e3c96cb17fe\") " pod="openshift-image-registry/node-ca-kx5rc" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.336240 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnlxh\" (UniqueName: \"kubernetes.io/projected/c6932c20-41d2-487b-90b4-1e3c96cb17fe-kube-api-access-cnlxh\") pod \"node-ca-kx5rc\" (UID: \"c6932c20-41d2-487b-90b4-1e3c96cb17fe\") " pod="openshift-image-registry/node-ca-kx5rc" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.336263 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.336283 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c6932c20-41d2-487b-90b4-1e3c96cb17fe-host\") pod \"node-ca-kx5rc\" (UID: 
\"c6932c20-41d2-487b-90b4-1e3c96cb17fe\") " pod="openshift-image-registry/node-ca-kx5rc" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.336308 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:46:49 crc kubenswrapper[4796]: E0127 06:46:49.336451 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:46:49 crc kubenswrapper[4796]: E0127 06:46:49.336468 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:46:49 crc kubenswrapper[4796]: E0127 06:46:49.336480 4796 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:46:49 crc kubenswrapper[4796]: E0127 06:46:49.336496 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:46:49 crc kubenswrapper[4796]: E0127 06:46:49.336548 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:46:49 crc kubenswrapper[4796]: E0127 06:46:49.336561 4796 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:46:49 crc kubenswrapper[4796]: E0127 06:46:49.336521 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 06:46:53.33650607 +0000 UTC m=+34.443473397 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:46:49 crc kubenswrapper[4796]: E0127 06:46:49.336633 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 06:46:53.336615673 +0000 UTC m=+34.443583000 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.336708 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c6932c20-41d2-487b-90b4-1e3c96cb17fe-host\") pod \"node-ca-kx5rc\" (UID: \"c6932c20-41d2-487b-90b4-1e3c96cb17fe\") " pod="openshift-image-registry/node-ca-kx5rc" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.337405 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c6932c20-41d2-487b-90b4-1e3c96cb17fe-serviceca\") pod \"node-ca-kx5rc\" (UID: \"c6932c20-41d2-487b-90b4-1e3c96cb17fe\") " pod="openshift-image-registry/node-ca-kx5rc" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.339344 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.351899 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46ql2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3555bc2-e335-4479-8b6f-8b5970b27a25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\
\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7ckp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46ql2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.356612 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnlxh\" (UniqueName: \"kubernetes.io/projected/c6932c20-41d2-487b-90b4-1e3c96cb17fe-kube-api-access-cnlxh\") pod \"node-ca-kx5rc\" (UID: \"c6932c20-41d2-487b-90b4-1e3c96cb17fe\") " pod="openshift-image-registry/node-ca-kx5rc" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.372714 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1fb58d6-d9a4-4095-be46-a544216963f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xqmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:49Z 
is after 2025-08-24T17:21:41Z" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.386659 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:46:45.928911 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.401597 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.410418 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kx5rc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6932c20-41d2-487b-90b4-1e3c96cb17fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kx5rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.422811 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:46:45.928911 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.437871 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.451439 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.457611 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-kx5rc" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.469904 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46ql2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3555bc2-e335-4479-8b6f-8b5970b27a25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7ckp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\
\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46ql2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:49 crc kubenswrapper[4796]: W0127 06:46:49.471812 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6932c20_41d2_487b_90b4_1e3c96cb17fe.slice/crio-c8b416ee772249318f6755a27714d4d4a520819daee3618577a969b2e96a126c WatchSource:0}: Error finding container c8b416ee772249318f6755a27714d4d4a520819daee3618577a969b2e96a126c: Status 404 returned error can't find the container with id c8b416ee772249318f6755a27714d4d4a520819daee3618577a969b2e96a126c Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.488907 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1fb58d6-d9a4-4095-be46-a544216963f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xqmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:49Z 
is after 2025-08-24T17:21:41Z" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.506393 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91a38b2a-1669-414b-b2de-bd9f7d656db4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e496a7cea3a6ba7b8e4e60bd02f9cafc56cc4cf1049524b15749352b71b1551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40ca26c279d35c5369a97a8926c8c76b5773da64438b08a0fd721604f5cf1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51f23c6ba8596716e9d0b3001ed8d1494f186950ddb86e3187e98b5a9c5940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5e7ccdedeb2a9cc67c1eb6014f0a327f009baace0decc678fd319c7f02e63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8be41d70648c6527439ed95843ec087f65506642438fab67094697952d0c69ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-01-27T06:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.542448 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c8f9bf62140eb4e54fc54bfef65710f511027fb29cc70e83d1f2516bb5b222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.582276 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://372dffb4f29750115dda48ed80597b347e7021ac6dcb2c4161ee2f290f27024f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.627138 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec0d72cc9d4158b10455650c8b8f916e9c314616c0cf92b4e1f2b966fa650e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.662703 4796 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.682672 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 15:43:18.817457433 +0000 UTC Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.701734 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.744906 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.746233 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.746265 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:46:49 crc kubenswrapper[4796]: E0127 06:46:49.746435 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:46:49 crc kubenswrapper[4796]: E0127 06:46:49.746615 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.952450 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-kx5rc" event={"ID":"c6932c20-41d2-487b-90b4-1e3c96cb17fe","Type":"ContainerStarted","Data":"52d0c40d9e59c7e62b7479b955967849178079a9bb2ac73ac987bd4eddc11183"} Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.952518 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-kx5rc" event={"ID":"c6932c20-41d2-487b-90b4-1e3c96cb17fe","Type":"ContainerStarted","Data":"c8b416ee772249318f6755a27714d4d4a520819daee3618577a969b2e96a126c"} Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.956116 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" event={"ID":"a1fb58d6-d9a4-4095-be46-a544216963f7","Type":"ContainerStarted","Data":"88151a9fd7ac9365f1563c367588d8093a16206cfebfc1703f8d9cfceaee5134"} Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.956164 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" event={"ID":"a1fb58d6-d9a4-4095-be46-a544216963f7","Type":"ContainerStarted","Data":"8fa4ef31b3123acd6bde6c015b604904e010d9996f7ba318fa1a7ae674ea7cf5"} Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.956179 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" event={"ID":"a1fb58d6-d9a4-4095-be46-a544216963f7","Type":"ContainerStarted","Data":"c4e75d49621819f00db37b6f4471e5188a3eb781549bbae1422b6c2216a3366c"} Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.956188 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" event={"ID":"a1fb58d6-d9a4-4095-be46-a544216963f7","Type":"ContainerStarted","Data":"bafac0d6ec40525fc7e61773ebfc410b7f7f42535491b72e0301ce93bd1a6a46"} Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.959232 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" event={"ID":"523d7c54-e525-4fef-8de8-b3bff6b70d8e","Type":"ContainerStarted","Data":"23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5"} Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.973193 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:49 crc kubenswrapper[4796]: I0127 06:46:49.987211 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:49Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:50 crc kubenswrapper[4796]: I0127 06:46:50.005150 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:50 crc kubenswrapper[4796]: I0127 06:46:50.017738 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://372dffb4f29750115dda48ed80597b347e7021ac6dcb2c4161ee2f290f27024f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:50 crc kubenswrapper[4796]: I0127 06:46:50.031333 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec0d72cc9d4158b10455650c8b8f916e9c314616c0cf92b4e1f2b966fa650e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:50 crc kubenswrapper[4796]: I0127 06:46:50.050661 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:50 crc kubenswrapper[4796]: I0127 06:46:50.064405 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:50 crc kubenswrapper[4796]: I0127 06:46:50.080468 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46ql2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3555bc2-e335-4479-8b6f-8b5970b27a25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7ckp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46ql2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:50 crc kubenswrapper[4796]: I0127 06:46:50.110238 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1fb58d6-d9a4-4095-be46-a544216963f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xqmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:50 crc kubenswrapper[4796]: I0127 06:46:50.140607 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kx5rc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6932c20-41d2-487b-90b4-1e3c96cb17fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kx5rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:50 crc kubenswrapper[4796]: I0127 06:46:50.180909 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:46:45.928911 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:50 crc kubenswrapper[4796]: I0127 06:46:50.223583 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:50 crc kubenswrapper[4796]: I0127 06:46:50.269869 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91a38b2a-1669-414b-b2de-bd9f7d656db4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e496a7cea3a6ba7b8e4e60bd02f9cafc56cc4cf1049524b15749352b71b1551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40ca26c279d35c5369a97a8926c8c76b5773da64438b08a0fd721604f5cf1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51f23c6ba8596716e9d0b3001ed8d1494f186950ddb86e3187e98b5a9c5940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5e7ccdedeb2a9cc67c1eb6014f0a327f009ba
ace0decc678fd319c7f02e63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8be41d70648c6527439ed95843ec087f65506642438fab67094697952d0c69ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:50 crc kubenswrapper[4796]: I0127 06:46:50.299941 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c8f9bf62140eb4e54fc54bfef65710f511027fb29cc70e83d1f2516bb5b222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:50 crc kubenswrapper[4796]: I0127 06:46:50.307005 4796 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 27 06:46:50 crc kubenswrapper[4796]: I0127 06:46:50.683728 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 20:14:17.620029848 +0000 UTC Jan 27 06:46:50 crc kubenswrapper[4796]: I0127 06:46:50.746603 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:46:50 crc kubenswrapper[4796]: E0127 06:46:50.746863 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:46:50 crc kubenswrapper[4796]: I0127 06:46:50.759032 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec0d72cc9d4158b10455650c8b8f916e9c314616c0cf92b4e1f2b966fa650e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID
\\\":\\\"cri-o://e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:50 crc kubenswrapper[4796]: I0127 06:46:50.771069 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:50 crc kubenswrapper[4796]: I0127 06:46:50.784166 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:50 crc kubenswrapper[4796]: I0127 06:46:50.795377 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:50 crc kubenswrapper[4796]: I0127 06:46:50.805320 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://372dffb4f29750115dda48ed80597b347e7021ac6dcb2c4161ee2f290f27024f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:50 crc kubenswrapper[4796]: I0127 06:46:50.819657 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath
\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:50 crc kubenswrapper[4796]: I0127 06:46:50.834514 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:46:45.928911 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:50 crc kubenswrapper[4796]: I0127 06:46:50.848229 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:50 crc kubenswrapper[4796]: I0127 06:46:50.861928 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:50 crc kubenswrapper[4796]: I0127 06:46:50.891040 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46ql2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3555bc2-e335-4479-8b6f-8b5970b27a25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7ckp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46ql2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:50 crc kubenswrapper[4796]: I0127 06:46:50.930322 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1fb58d6-d9a4-4095-be46-a544216963f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xqmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:50 crc kubenswrapper[4796]: I0127 06:46:50.952416 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kx5rc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6932c20-41d2-487b-90b4-1e3c96cb17fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kx5rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:50 crc kubenswrapper[4796]: I0127 06:46:50.965276 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" event={"ID":"a1fb58d6-d9a4-4095-be46-a544216963f7","Type":"ContainerStarted","Data":"37bbcf36779d5d8ff4b06b0dba628a581f534c90af5774321a51bb6360c88db6"} Jan 27 06:46:50 crc kubenswrapper[4796]: I0127 06:46:50.965330 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" event={"ID":"a1fb58d6-d9a4-4095-be46-a544216963f7","Type":"ContainerStarted","Data":"6e09fbd802f33aa6f2ebf9135fea9a98a1e8c0bb14440043fbada1b528b505ed"} Jan 27 06:46:50 crc kubenswrapper[4796]: I0127 06:46:50.967450 4796 generic.go:334] "Generic (PLEG): container finished" podID="523d7c54-e525-4fef-8de8-b3bff6b70d8e" containerID="23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5" exitCode=0 Jan 27 06:46:50 crc kubenswrapper[4796]: I0127 06:46:50.967476 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" event={"ID":"523d7c54-e525-4fef-8de8-b3bff6b70d8e","Type":"ContainerDied","Data":"23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5"} Jan 27 06:46:50 crc kubenswrapper[4796]: I0127 06:46:50.987587 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91a38b2a-1669-414b-b2de-bd9f7d656db4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e496a7cea3a6ba7b8e4e60bd02f9cafc56cc4cf1049524b15749352b71b1551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40ca26c279d35c5369a97a8926c8c76b5773da64438b08a0fd721604f5cf1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51f23c6ba8596716e9d0b3001ed8d1494f186950ddb86e3187e98b5a9c5940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5e7ccdedeb2a9cc67c1eb6014f0a327f009ba
ace0decc678fd319c7f02e63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8be41d70648c6527439ed95843ec087f65506642438fab67094697952d0c69ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:50Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:51 crc kubenswrapper[4796]: I0127 06:46:51.003773 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c8f9bf62140eb4e54fc54bfef65710f511027fb29cc70e83d1f2516bb5b222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:51Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:51 crc kubenswrapper[4796]: I0127 06:46:51.017002 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46ql2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3555bc2-e335-4479-8b6f-8b5970b27a25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-h7ckp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46ql2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:51Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:51 crc kubenswrapper[4796]: I0127 06:46:51.042884 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1fb58d6-d9a4-4095-be46-a544216963f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xqmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:51Z 
is after 2025-08-24T17:21:41Z" Jan 27 06:46:51 crc kubenswrapper[4796]: I0127 06:46:51.060277 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kx5rc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6932c20-41d2-487b-90b4-1e3c96cb17fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52d0c40d9e59c7e62b7479b955967849178079a9bb2ac73ac987bd4eddc11183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kx5rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:51Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:51 crc kubenswrapper[4796]: I0127 06:46:51.080425 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:46:45.928911 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:51Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:51 crc kubenswrapper[4796]: I0127 06:46:51.095562 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:51Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:51 crc kubenswrapper[4796]: I0127 06:46:51.108472 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:51Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:51 crc kubenswrapper[4796]: I0127 06:46:51.146493 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91a38b2a-1669-414b-b2de-bd9f7d656db4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e496a7cea3a6ba7b8e4e60bd02f9cafc56cc4cf1049524b15749352b71b1551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40ca26c279d35c5369a97a8926c8c76b5773da64438b08a0fd721604f5cf1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51f23c6ba8596716e9d0b3001ed8d1494f186950ddb86e3187e98b5a9c5940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5e7ccdedeb2a9cc67c1eb6014f0a327f009ba
ace0decc678fd319c7f02e63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8be41d70648c6527439ed95843ec087f65506642438fab67094697952d0c69ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:51Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:51 crc kubenswrapper[4796]: I0127 06:46:51.181640 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c8f9bf62140eb4e54fc54bfef65710f511027fb29cc70e83d1f2516bb5b222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:51Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:51 crc kubenswrapper[4796]: I0127 06:46:51.220385 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:51Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:51 crc kubenswrapper[4796]: I0127 06:46:51.261407 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:51Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:51 crc kubenswrapper[4796]: I0127 06:46:51.299233 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://372dffb4f29750115dda48ed80597b347e7021ac6dcb2c4161ee2f290f27024f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:51Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:51 crc kubenswrapper[4796]: I0127 06:46:51.344077 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec0d72cc9d4158b10455650c8b8f916e9c314616c0cf92b4e1f2b966fa650e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:51Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:51 crc kubenswrapper[4796]: I0127 06:46:51.381333 4796 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:51Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:51 crc kubenswrapper[4796]: I0127 06:46:51.408246 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:46:51 crc kubenswrapper[4796]: I0127 06:46:51.411778 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:46:51 crc kubenswrapper[4796]: I0127 06:46:51.424911 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\
":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:51Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:51 crc kubenswrapper[4796]: I0127 06:46:51.442424 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 27 06:46:51 crc kubenswrapper[4796]: I0127 06:46:51.483357 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:51Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:51 crc kubenswrapper[4796]: I0127 06:46:51.524778 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift
-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:46:45.928911 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:51Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:51 crc kubenswrapper[4796]: I0127 06:46:51.564065 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:51Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:51 crc kubenswrapper[4796]: I0127 06:46:51.603513 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:51Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:51 crc kubenswrapper[4796]: I0127 06:46:51.646568 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46ql2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3555bc2-e335-4479-8b6f-8b5970b27a25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7ckp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46ql2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:51Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:51 crc kubenswrapper[4796]: I0127 06:46:51.684204 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 04:46:50.082287767 +0000 UTC Jan 27 06:46:51 crc kubenswrapper[4796]: I0127 06:46:51.684950 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1fb58d6-d9a4-4095-be46-a544216963f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xqmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:51Z 
is after 2025-08-24T17:21:41Z" Jan 27 06:46:51 crc kubenswrapper[4796]: I0127 06:46:51.723102 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kx5rc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6932c20-41d2-487b-90b4-1e3c96cb17fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52d0c40d9e59c7e62b7479b955967849178079a9bb2ac73ac987bd4eddc11183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kx5rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:51Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:51 crc kubenswrapper[4796]: I0127 06:46:51.746714 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:46:51 crc kubenswrapper[4796]: I0127 06:46:51.746715 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:46:51 crc kubenswrapper[4796]: E0127 06:46:51.746888 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:46:51 crc kubenswrapper[4796]: E0127 06:46:51.747044 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:46:51 crc kubenswrapper[4796]: I0127 06:46:51.773756 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91a38b2a-1669-414b-b2de-bd9f7d656db4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e496a7cea3a6ba7b8e4e60bd02f9cafc56cc4cf1049524b15749352b71b1551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40ca26c279d35c5369a97a8926c8c76b5773da64438b08a0fd721604f5cf1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51f23c6ba8596716e9d0b3001ed8d1494f
186950ddb86e3187e98b5a9c5940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5e7ccdedeb2a9cc67c1eb6014f0a327f009baace0decc678fd319c7f02e63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8be41d70648c6527439ed95843ec087f65506642438fab67094697952d0c69ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:51Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:51 crc kubenswrapper[4796]: I0127 06:46:51.807658 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c8f9bf62140eb4e54fc54bfef65710f511027fb29cc70e83d1f2516bb5b222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:51Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:51 crc kubenswrapper[4796]: I0127 06:46:51.844362 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:51Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:51 crc kubenswrapper[4796]: I0127 06:46:51.886023 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:51Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:51 crc kubenswrapper[4796]: I0127 06:46:51.927325 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:51Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:51 crc kubenswrapper[4796]: I0127 06:46:51.963998 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://372dffb4f29750115dda48ed80597b347e7021ac6dcb2c4161ee2f290f27024f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T06:46:51Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:51 crc kubenswrapper[4796]: I0127 06:46:51.975500 4796 generic.go:334] "Generic (PLEG): container finished" podID="523d7c54-e525-4fef-8de8-b3bff6b70d8e" containerID="54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035" exitCode=0 Jan 27 06:46:51 crc kubenswrapper[4796]: I0127 06:46:51.975577 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" event={"ID":"523d7c54-e525-4fef-8de8-b3bff6b70d8e","Type":"ContainerDied","Data":"54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035"} Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.007089 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec0d72cc9d4158b10455650c8b8f916e9c314616c0cf92b4e1f2b966fa650e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:52Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.042715 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:52Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.087824 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46ql2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3555bc2-e335-4479-8b6f-8b5970b27a25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\
\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7ckp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46ql2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:52Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.125783 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1fb58d6-d9a4-4095-be46-a544216963f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xqmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:52Z 
is after 2025-08-24T17:21:41Z" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.160453 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kx5rc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6932c20-41d2-487b-90b4-1e3c96cb17fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52d0c40d9e59c7e62b7479b955967849178079a9bb2ac73ac987bd4eddc11183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kx5rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:52Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.202124 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:46:45.928911 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:52Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.242759 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:52Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.261347 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.263912 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.264024 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.264088 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.264284 4796 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.300468 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91a38b2a-1669-414b-b2de-bd9f7d656db4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e496a7cea3a6ba7b8e4e60bd02f9cafc56cc4cf1049524b15749352b71b1551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40ca26c279d35c5369a97a8926c8c76b5773da64438b08a0fd721604f5cf1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51f23c6ba8596716e9d0b3001ed8d1494f186950ddb86e3187e98b5a9c5940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5e7ccdedeb2a9cc67c1eb6014f0a327f009ba
ace0decc678fd319c7f02e63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8be41d70648c6527439ed95843ec087f65506642438fab67094697952d0c69ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:52Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.314347 4796 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.314658 4796 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.315716 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.315817 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.315898 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.316016 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.316117 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:52Z","lastTransitionTime":"2026-01-27T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:46:52 crc kubenswrapper[4796]: E0127 06:46:52.340935 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab5d23f4-0a1a-4348-a4ed-cd82856490af\\\",\\\"systemUUID\\\":\\\"ea2a725c-47df-4291-8c97-fc5620e930c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:52Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.344270 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.344296 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.344304 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.344335 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.344345 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:52Z","lastTransitionTime":"2026-01-27T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:52 crc kubenswrapper[4796]: E0127 06:46:52.355652 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab5d23f4-0a1a-4348-a4ed-cd82856490af\\\",\\\"systemUUID\\\":\\\"ea2a725c-47df-4291-8c97-fc5620e930c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:52Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.359818 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.359855 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.359866 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.359881 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.359901 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:52Z","lastTransitionTime":"2026-01-27T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.362170 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c8f9bf62140eb4e54fc54bfef65710f511027fb29cc70e83d1f2516bb5b222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:52Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:52 crc kubenswrapper[4796]: E0127 06:46:52.372557 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab5d23f4-0a1a-4348-a4ed-cd82856490af\\\",\\\"systemUUID\\\":\\\"ea2a725c-47df-4291-8c97-fc5620e930c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:52Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.376898 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.376935 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.376952 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.376971 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.376985 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:52Z","lastTransitionTime":"2026-01-27T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:46:52 crc kubenswrapper[4796]: E0127 06:46:52.389477 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab5d23f4-0a1a-4348-a4ed-cd82856490af\\\",\\\"systemUUID\\\":\\\"ea2a725c-47df-4291-8c97-fc5620e930c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:52Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.393922 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.393955 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.393964 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.393978 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.393989 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:52Z","lastTransitionTime":"2026-01-27T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.402235 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:52Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:52 crc kubenswrapper[4796]: E0127 06:46:52.405117 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab5d23f4-0a1a-4348-a4ed-cd82856490af\\\",\\\"systemUUID\\\":\\\"ea2a725c-47df-4291-8c97-fc5620e930c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:52Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:52 crc kubenswrapper[4796]: E0127 06:46:52.405237 4796 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.406768 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.406811 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.406824 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.406839 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.406850 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:52Z","lastTransitionTime":"2026-01-27T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.440809 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:52Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.482073 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:52Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.514554 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.514598 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.514609 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.514625 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.514635 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:52Z","lastTransitionTime":"2026-01-27T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.523584 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://372dffb4f29750115dda48ed80597b347e7021ac6dcb2c4161ee2f290f27024f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:52Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.559069 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec0d72cc9d4158b10455650c8b8f916e9c314616c0cf92b4e1f2b966fa650e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:52Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.602008 4796 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7c8820-8f4a-4cb8-a949-cbfa4e8efea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://948d327df2d50b17a182fed3cb301b1396229844a41a5270ea33d7c5e9f95db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48948084ba0e80ec0b29617d1cec75a0d5e625e2343ee2d2cb3485bfbc526e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a6b5f1a21e3d2b93baf990f32b73b024a048bffefe4304ff435a2bd428b099b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc39fbcef1d044b8d23138e484
25ad33e0650e222fb2131f45b59ca7d954820\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:52Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.616632 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.616675 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.616687 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.616705 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.616717 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:52Z","lastTransitionTime":"2026-01-27T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.651617 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:52Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.685257 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 04:27:59.632302276 +0000 UTC Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.718171 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.718205 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.718216 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.718229 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.718238 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:52Z","lastTransitionTime":"2026-01-27T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.746478 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:46:52 crc kubenswrapper[4796]: E0127 06:46:52.746738 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.821003 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.821072 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.821096 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.821140 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.821163 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:52Z","lastTransitionTime":"2026-01-27T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.923826 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.923870 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.923884 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.923903 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.923916 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:52Z","lastTransitionTime":"2026-01-27T06:46:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.982150 4796 generic.go:334] "Generic (PLEG): container finished" podID="523d7c54-e525-4fef-8de8-b3bff6b70d8e" containerID="73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b" exitCode=0 Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.982259 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" event={"ID":"523d7c54-e525-4fef-8de8-b3bff6b70d8e","Type":"ContainerDied","Data":"73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b"} Jan 27 06:46:52 crc kubenswrapper[4796]: I0127 06:46:52.988771 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" event={"ID":"a1fb58d6-d9a4-4095-be46-a544216963f7","Type":"ContainerStarted","Data":"7353129cb709af2186d760a00dbef3ae91dc7feeb6dab2149754accb95fae36b"} Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.007662 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:53Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.024198 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:53Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.026167 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.026234 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.026249 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.026273 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.026313 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:53Z","lastTransitionTime":"2026-01-27T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.041438 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:53Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.055140 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://372dffb4f29750115dda48ed80597b347e7021ac6dcb2c4161ee2f290f27024f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:53Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.072481 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec0d72cc9d4158b10455650c8b8f916e9c314616c0cf92b4e1f2b966fa650e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:53Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.090269 4796 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23cc4bd
0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:53Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.104398 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7c8820-8f4a-4cb8-a949-cbfa4e8efea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://948d327df2d50b17a182fed3cb301b1396229844a41a5270ea33d7c5e9f95db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48948084ba0e80ec0b29617d1cec75a0d5e625e2343ee2d2cb3485bfbc526e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a6b5f1a21e3d2b93baf990f32b73b024a048bffefe4304ff435a2bd428b099b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc39fbcef1d044b8d23138e48425ad33e0650e222fb2131f45b59ca7d954820\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:53Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.117479 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:53Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.129515 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.129562 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.129572 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.129586 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.129597 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:53Z","lastTransitionTime":"2026-01-27T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.134324 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa4
1ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:53Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.149507 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46ql2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3555bc2-e335-4479-8b6f-8b5970b27a25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var
-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7ckp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46ql2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:53Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.167882 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1fb58d6-d9a4-4095-be46-a544216963f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xqmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:53Z 
is after 2025-08-24T17:21:41Z" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.178572 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kx5rc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6932c20-41d2-487b-90b4-1e3c96cb17fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52d0c40d9e59c7e62b7479b955967849178079a9bb2ac73ac987bd4eddc11183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kx5rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:53Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.191994 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:46:45.928911 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:53Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.203251 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c8f9bf62140eb4e54fc54bfef65710f511027fb29cc70e83d1f2516bb5b222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:53Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.232126 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.232162 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.232170 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.232184 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.232194 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:53Z","lastTransitionTime":"2026-01-27T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.247715 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91a38b2a-1669-414b-b2de-bd9f7d656db4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e496a7cea3a6ba7b8e4e60bd02f9cafc56cc4cf1049524b15749352b71b1551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40ca26c279d35c5369a97a8926c8c76b5773da64438b08a0fd721604f5cf1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51f23c6ba8596716e9d0b3001ed8d1494f186950ddb86e3187e98b5a9c5940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5e7ccdedeb2a9cc67c1eb6014f0a327f009baace0decc678fd319c7f02e63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8be41d70648c6527439ed95843ec087f65506642438fab67094697952d0c69ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:53Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.281086 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.281230 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:46:53 crc kubenswrapper[4796]: E0127 06:46:53.281248 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:47:01.281220248 +0000 UTC m=+42.388187575 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.281274 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:46:53 crc kubenswrapper[4796]: E0127 06:46:53.281340 4796 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:46:53 crc kubenswrapper[4796]: E0127 06:46:53.281354 4796 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:46:53 crc kubenswrapper[4796]: E0127 06:46:53.281386 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:47:01.281375421 +0000 UTC m=+42.388342808 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:46:53 crc kubenswrapper[4796]: E0127 06:46:53.281408 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:47:01.281400142 +0000 UTC m=+42.388367469 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.335249 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.335284 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.335293 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.335308 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.335318 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:53Z","lastTransitionTime":"2026-01-27T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.382401 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.382756 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:46:53 crc kubenswrapper[4796]: E0127 06:46:53.382590 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:46:53 crc kubenswrapper[4796]: E0127 06:46:53.382802 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:46:53 crc kubenswrapper[4796]: E0127 06:46:53.382827 4796 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:46:53 crc kubenswrapper[4796]: E0127 06:46:53.382844 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:46:53 crc kubenswrapper[4796]: E0127 06:46:53.382859 4796 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:46:53 crc kubenswrapper[4796]: E0127 06:46:53.382869 4796 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:46:53 crc kubenswrapper[4796]: E0127 06:46:53.382906 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 06:47:01.382882137 +0000 UTC m=+42.489849474 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:46:53 crc kubenswrapper[4796]: E0127 06:46:53.382933 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 06:47:01.382922378 +0000 UTC m=+42.489889715 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.437862 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.437905 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.437919 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.437942 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.437958 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:53Z","lastTransitionTime":"2026-01-27T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.541119 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.541180 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.541200 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.541228 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.541251 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:53Z","lastTransitionTime":"2026-01-27T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.644624 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.644701 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.644712 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.644732 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.644745 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:53Z","lastTransitionTime":"2026-01-27T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.686459 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 19:11:31.636090987 +0000 UTC Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.746265 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.746828 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:46:53 crc kubenswrapper[4796]: E0127 06:46:53.746840 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:46:53 crc kubenswrapper[4796]: E0127 06:46:53.747019 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.751560 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.751625 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.751643 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.751666 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.751684 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:53Z","lastTransitionTime":"2026-01-27T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.854125 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.854639 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.854672 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.854701 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.854726 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:53Z","lastTransitionTime":"2026-01-27T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.867399 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.868385 4796 scope.go:117] "RemoveContainer" containerID="46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c" Jan 27 06:46:53 crc kubenswrapper[4796]: E0127 06:46:53.868659 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.958582 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.958628 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.958640 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.958659 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.958672 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:53Z","lastTransitionTime":"2026-01-27T06:46:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:46:53 crc kubenswrapper[4796]: I0127 06:46:53.997631 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" event={"ID":"523d7c54-e525-4fef-8de8-b3bff6b70d8e","Type":"ContainerStarted","Data":"2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f"} Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.010960 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.023858 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.034665 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.044912 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://372dffb4f29750115dda48ed80597b347e7021ac6dcb2c4161ee2f290f27024f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T06:46:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.055102 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec0d72cc9d4158b10455650c8b8f916e9c314616c0cf92b4e1f2b966fa650e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.060175 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.060218 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.060229 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.060244 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.060253 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:54Z","lastTransitionTime":"2026-01-27T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.073825 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.084350 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7c8820-8f4a-4cb8-a949-cbfa4e8efea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://948d327df2d50b17a182fed3cb301b1396229844a41a5270ea33d7c5e9f95db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48948084ba0e80ec0b29617d1cec75a0d5e625e2343ee2d2cb3485bfbc526e3b\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a6b5f1a21e3d2b93baf990f32b73b024a048bffefe4304ff435a2bd428b099b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc39fbcef1d044b8d23138e48425ad33e0650e222fb2131f45b59ca7d954820\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.096308 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.107272 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.117202 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46ql2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3555bc2-e335-4479-8b6f-8b5970b27a25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7ckp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46ql2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.133706 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1fb58d6-d9a4-4095-be46-a544216963f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xqmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.143277 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kx5rc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6932c20-41d2-487b-90b4-1e3c96cb17fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52d0c40d9e59c7e62b7479b955967849178079a9bb2ac73ac987bd4eddc11183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kx5rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.155478 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:46:45.928911 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.162305 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.162344 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.162354 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.162368 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.162377 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:54Z","lastTransitionTime":"2026-01-27T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.167443 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c8f9bf62140eb4e54fc54bfef65710f511027fb29cc70e83d1f2516bb5b222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.188664 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91a38b2a-1669-414b-b2de-bd9f7d656db4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e496a7cea3a6ba7b8e4e60bd02f9cafc56cc4cf1049524b15749352b71b1551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40ca26c279d35c5369a97a8926c8c76b5773da64438b08a0fd721604f5cf1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51f23c6ba8596716e9d0b3001ed8d1494f186950ddb86e3187e98b5a9c5940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5e7ccdedeb2a9cc67c1eb6014f0a327f009ba
ace0decc678fd319c7f02e63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8be41d70648c6527439ed95843ec087f65506642438fab67094697952d0c69ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.265150 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.265205 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.265225 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.265248 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.265265 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:54Z","lastTransitionTime":"2026-01-27T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.367884 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.367921 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.367931 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.367945 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.367956 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:54Z","lastTransitionTime":"2026-01-27T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.470772 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.470827 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.470840 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.470859 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.470871 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:54Z","lastTransitionTime":"2026-01-27T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.574108 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.574167 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.574183 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.574206 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.574218 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:54Z","lastTransitionTime":"2026-01-27T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.677458 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.677593 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.677621 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.677649 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.677671 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:54Z","lastTransitionTime":"2026-01-27T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.687348 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 21:32:30.204041673 +0000 UTC Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.746250 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:46:54 crc kubenswrapper[4796]: E0127 06:46:54.746376 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.780279 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.780321 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.780331 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.780347 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.780359 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:54Z","lastTransitionTime":"2026-01-27T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.883468 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.883503 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.883511 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.883526 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.883554 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:54Z","lastTransitionTime":"2026-01-27T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.994779 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.994825 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.994836 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.994850 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:54 crc kubenswrapper[4796]: I0127 06:46:54.994871 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:54Z","lastTransitionTime":"2026-01-27T06:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.002599 4796 generic.go:334] "Generic (PLEG): container finished" podID="523d7c54-e525-4fef-8de8-b3bff6b70d8e" containerID="2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f" exitCode=0 Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.002655 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" event={"ID":"523d7c54-e525-4fef-8de8-b3bff6b70d8e","Type":"ContainerDied","Data":"2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f"} Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.007835 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" event={"ID":"a1fb58d6-d9a4-4095-be46-a544216963f7","Type":"ContainerStarted","Data":"5056e562fc8402878999be6871b1600c19d06378c0e312b483fb6f6b95022a73"} Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.008080 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.008108 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.016020 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7c8820-8f4a-4cb8-a949-cbfa4e8efea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://948d327df2d50b17a182fed3cb301b1396229844a41a5270ea33d7c5e9f95db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48948084ba0e80ec0b29617d1cec75a0d5e625e2343ee2d2cb3485bfbc526e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a6b5f1a21e3d2b93baf990f32b73b024a048bffefe4304ff435a2bd428b099b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc39fbcef1d044b8d23138e48425ad33e0650e222fb2131f45b59ca7d954820\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:55Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.048886 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mo
untPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:55Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.051126 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.053234 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.062947 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46ql2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3555bc2-e335-4479-8b6f-8b5970b27a25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7ckp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46ql2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:55Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.083017 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1fb58d6-d9a4-4095-be46-a544216963f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xqmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:55Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.092766 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kx5rc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6932c20-41d2-487b-90b4-1e3c96cb17fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52d0c40d9e59c7e62b7479b955967849178079a9bb2ac73ac987bd4eddc11183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kx5rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:55Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.096594 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.096622 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.096635 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.096651 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.096661 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:55Z","lastTransitionTime":"2026-01-27T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.108485 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f
7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:46:45.928911 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:55Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.122696 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:55Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.133293 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:55Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.151069 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91a38b2a-1669-414b-b2de-bd9f7d656db4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e496a7cea3a6ba7b8e4e60bd02f9cafc56cc4cf1049524b15749352b71b1551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40ca26c279d35c5369a97a8926c8c76b5773da64438b08a0fd721604f5cf1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51f23c6ba8596716e9d0b3001ed8d1494f186950ddb86e3187e98b5a9c5940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5e7ccdedeb2a9cc67c1eb6014f0a327f009ba
ace0decc678fd319c7f02e63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8be41d70648c6527439ed95843ec087f65506642438fab67094697952d0c69ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:55Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.162917 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c8f9bf62140eb4e54fc54bfef65710f511027fb29cc70e83d1f2516bb5b222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:55Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.174375 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:55Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.197922 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:55Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.199755 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.199799 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.199812 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.199839 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.199854 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:55Z","lastTransitionTime":"2026-01-27T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.220333 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://372dffb4f29750115dda48ed80597b347e7021ac6dcb2c4161ee2f290f27024f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:55Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.247600 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec0d72cc9d4158b10455650c8b8f916e9c314616c0cf92b4e1f2b966fa650e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:55Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.262528 4796 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:55Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.276247 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2af
40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:55Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.288277 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7c8820-8f4a-4cb8-a949-cbfa4e8efea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://948d327df2d50b17a182fed3cb301b1396229844a41a5270ea33d7c5e9f95db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48948084ba0e80ec0b29617d1cec75a0d5e625e2343ee2d2cb3485bfbc526e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a6b5f1a21e3d2b93baf990f32b73b024a048bffefe4304ff435a2bd428b099b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc39fbcef1d044b8d23138e48425ad33e0650e222fb2131f45b59ca7d954820\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:55Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.300034 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:55Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.302172 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.302207 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.302218 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.302234 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.302246 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:55Z","lastTransitionTime":"2026-01-27T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.328907 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa4
1ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:55Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.352452 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46ql2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3555bc2-e335-4479-8b6f-8b5970b27a25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var
-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7ckp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46ql2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:55Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.373707 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1fb58d6-d9a4-4095-be46-a544216963f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa4ef31b3123acd6bde6c015b604904e010d9996f7ba318fa1a7ae674ea7cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88151a9fd7ac9365f1563c367588d8093a16206cfebfc1703f8d9cfceaee5134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bbcf36779d5d8ff4b06b0dba628a581f534c90af5774321a51bb6360c88db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e09fbd802f33aa6f2ebf9135fea9a98a1e8c0bb14440043fbada1b528b505ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e75d49621819f00db37b6f4471e5188a3eb781549bbae1422b6c2216a3366c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bafac0d6ec40525fc7e61773ebfc410b7f7f42535491b72e0301ce93bd1a6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5056e562fc8402878999be6871b1600c19d06378
c0e312b483fb6f6b95022a73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7353129cb709af2186d760a00dbef3ae91dc7feeb6dab2149754accb95fae36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xqmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:55Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.387443 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kx5rc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6932c20-41d2-487b-90b4-1e3c96cb17fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52d0c40d9e59c7e62b7479b955967849178079a9bb2ac73ac987bd4eddc11183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kx5rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:55Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.403766 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:46:45.928911 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:55Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.404714 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.404752 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.404763 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.404779 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.404792 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:55Z","lastTransitionTime":"2026-01-27T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.418303 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c8f9bf62140eb4e54fc54bfef65710f511027fb29cc70e83d1f2516bb5b222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:55Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.440898 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91a38b2a-1669-414b-b2de-bd9f7d656db4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e496a7cea3a6ba7b8e4e60bd02f9cafc56cc4cf1049524b15749352b71b1551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40ca26c279d35c5369a97a8926c8c76b5773da64438b08a0fd721604f5cf1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51f23c6ba8596716e9d0b3001ed8d1494f186950ddb86e3187e98b5a9c5940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5e7ccdedeb2a9cc67c1eb6014f0a327f009ba
ace0decc678fd319c7f02e63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8be41d70648c6527439ed95843ec087f65506642438fab67094697952d0c69ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:55Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.455254 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:55Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.471142 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:55Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.486898 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:55Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.499722 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://372dffb4f29750115dda48ed80597b347e7021ac6dcb2c4161ee2f290f27024f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T06:46:55Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.507861 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.507909 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.507922 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.507941 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.507952 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:55Z","lastTransitionTime":"2026-01-27T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.515852 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec0d72cc9d4158b10455650c8b8f916e9c314616c0cf92b4e1f2b966fa650e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:55Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.610698 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.610771 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.610788 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.610813 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.610828 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:55Z","lastTransitionTime":"2026-01-27T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.687817 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 17:47:56.818808294 +0000 UTC Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.713109 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.713152 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.713169 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.713188 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.713200 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:55Z","lastTransitionTime":"2026-01-27T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.746581 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.746610 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:46:55 crc kubenswrapper[4796]: E0127 06:46:55.746767 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:46:55 crc kubenswrapper[4796]: E0127 06:46:55.746867 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.816168 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.816244 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.816264 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.816292 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.816309 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:55Z","lastTransitionTime":"2026-01-27T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.919070 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.919118 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.919127 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.919149 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:55 crc kubenswrapper[4796]: I0127 06:46:55.919159 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:55Z","lastTransitionTime":"2026-01-27T06:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.014012 4796 generic.go:334] "Generic (PLEG): container finished" podID="523d7c54-e525-4fef-8de8-b3bff6b70d8e" containerID="b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc" exitCode=0 Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.014158 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.014858 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" event={"ID":"523d7c54-e525-4fef-8de8-b3bff6b70d8e","Type":"ContainerDied","Data":"b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc"} Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.021877 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.021920 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.021932 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.021949 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.021961 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:56Z","lastTransitionTime":"2026-01-27T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.123787 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.123826 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.123835 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.123849 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.123859 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:56Z","lastTransitionTime":"2026-01-27T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.225862 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.225901 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.225913 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.225929 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.225941 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:56Z","lastTransitionTime":"2026-01-27T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.328545 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.328576 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.328584 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.328597 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.328605 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:56Z","lastTransitionTime":"2026-01-27T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.430183 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.430228 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.430239 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.430253 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.430263 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:56Z","lastTransitionTime":"2026-01-27T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.532884 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.532916 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.532927 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.532941 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.532949 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:56Z","lastTransitionTime":"2026-01-27T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.634603 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.634631 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.634639 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.634650 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.634658 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:56Z","lastTransitionTime":"2026-01-27T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.689601 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 20:58:55.088480278 +0000 UTC Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.737732 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.737763 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.737775 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.737790 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.737803 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:56Z","lastTransitionTime":"2026-01-27T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.864568 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:46:56 crc kubenswrapper[4796]: E0127 06:46:56.864721 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.864576 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:46:56 crc kubenswrapper[4796]: E0127 06:46:56.864918 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.897148 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91a38b2a-1669-414b-b2de-bd9f7d656db4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e496a7cea3a6ba7b8e4e60bd02f9cafc56cc4cf1049524b15749352b71b1551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40ca26c279d35c5369a97a8926c8c76b5773da64438b08a0fd721604f5cf1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51f23c6ba8596716e9d0b3001ed8d1494f186950ddb86e3187e98b5a9c5940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5e7ccdedeb2a9cc67c1eb6014f0a327f009baace0decc678fd319c7f02e63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8be41d70648c6527439ed95843ec087f65506642438fab67094697952d0c69ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bac
f153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.917237 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c8f9bf62140eb4e54fc54bfef65710f511027fb29cc70e83d1f2516bb5b222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.935043 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.953411 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.969313 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.973135 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.973185 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.973197 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.973216 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.973230 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:56Z","lastTransitionTime":"2026-01-27T06:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.986526 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://372dffb4f29750115dda48ed80597b347e7021ac6dcb2c4161ee2f290f27024f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:56 crc kubenswrapper[4796]: I0127 06:46:56.998012 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec0d72cc9d4158b10455650c8b8f916e9c314616c0cf92b4e1f2b966fa650e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:56Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.009377 4796 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7c8820-8f4a-4cb8-a949-cbfa4e8efea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://948d327df2d50b17a182fed3cb301b1396229844a41a5270ea33d7c5e9f95db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48948084ba0e80ec0b29617d1cec75a0d5e625e2343ee2d2cb3485bfbc526e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a6b5f1a21e3d2b93baf990f32b73b024a048bffefe4304ff435a2bd428b099b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc39fbcef1d044b8d23138e484
25ad33e0650e222fb2131f45b59ca7d954820\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:57Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.018787 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.024485 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:57Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.038351 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:57Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.049802 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46ql2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3555bc2-e335-4479-8b6f-8b5970b27a25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7ckp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46ql2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:57Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.069586 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1fb58d6-d9a4-4095-be46-a544216963f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa4ef31b3123acd6bde6c015b604904e010d9996f7ba318fa1a7ae674ea7cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88151a9fd7ac9365f1563c367588d8093a16206cfebfc1703f8d9cfceaee5134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bbcf36779d5d8ff4b06b0dba628a581f534c90af5774321a51bb6360c88db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e09fbd802f33aa6f2ebf9135fea9a98a1e8c0bb14440043fbada1b528b505ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e75d49621819f00db37b6f4471e5188a3eb781549bbae1422b6c2216a3366c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bafac
0d6ec40525fc7e61773ebfc410b7f7f42535491b72e0301ce93bd1a6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5056e562fc8402878999be6871b1600c19d06378c0e312b483fb6f6b95022a73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\
\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7353129cb709af2186d760a00dbef3ae91dc7feeb6dab2149754accb95fae36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xqmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:57Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.078749 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 
27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.078779 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.078788 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.078802 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.078812 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:57Z","lastTransitionTime":"2026-01-27T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.089477 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kx5rc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6932c20-41d2-487b-90b4-1e3c96cb17fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52d0c40d9e59c7e62b7479b955967849178079a9bb2ac73ac987bd4eddc11183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kx5rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:57Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.105454 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:46:45.928911 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:57Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.121402 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:57Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.181210 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.181254 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.181270 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.181290 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.181302 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:57Z","lastTransitionTime":"2026-01-27T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.285263 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.285773 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.285788 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.285814 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.285829 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:57Z","lastTransitionTime":"2026-01-27T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.388994 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.389062 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.389088 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.389117 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.389138 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:57Z","lastTransitionTime":"2026-01-27T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.491693 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.491744 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.491770 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.491798 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.491819 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:57Z","lastTransitionTime":"2026-01-27T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.594580 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.594631 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.594663 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.594681 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.594695 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:57Z","lastTransitionTime":"2026-01-27T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.690269 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 23:17:20.645280475 +0000 UTC Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.697336 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.697420 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.697440 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.697466 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.697482 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:57Z","lastTransitionTime":"2026-01-27T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.746715 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:46:57 crc kubenswrapper[4796]: E0127 06:46:57.746924 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.799859 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.799899 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.799916 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.799931 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.799942 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:57Z","lastTransitionTime":"2026-01-27T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.903266 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.903298 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.903306 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.903318 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:57 crc kubenswrapper[4796]: I0127 06:46:57.903327 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:57Z","lastTransitionTime":"2026-01-27T06:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.006339 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.006402 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.006419 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.006446 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.006473 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:58Z","lastTransitionTime":"2026-01-27T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.030322 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" event={"ID":"523d7c54-e525-4fef-8de8-b3bff6b70d8e","Type":"ContainerStarted","Data":"3321234c0c20f923d830de9c484b2999c545617b190e9d71878ac27415be5e0f"} Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.057012 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:46:45.928911 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.082325 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.096194 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.108729 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.108777 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.108791 4796 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.108813 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.108829 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:58Z","lastTransitionTime":"2026-01-27T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.112417 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46ql2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3555bc2-e335-4479-8b6f-8b5970b27a25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7ckp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46ql2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.142890 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1fb58d6-d9a4-4095-be46-a544216963f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa4ef31b3123acd6bde6c015b604904e010d9996f7ba318fa1a7ae674ea7cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88151a9fd7ac9365f1563c367588d8093a16206cfebfc1703f8d9cfceaee5134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bbcf36779d5d8ff4b06b0dba628a581f534c90af5774321a51bb6360c88db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e09fbd802f33aa6f2ebf9135fea9a98a1e8c0bb14440043fbada1b528b505ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e75d49621819f00db37b6f4471e5188a3eb781549bbae1422b6c2216a3366c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bafac0d6ec40525fc7e61773ebfc410b7f7f42535491b72e0301ce93bd1a6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5056e562fc8402878999be6871b1600c19d06378
c0e312b483fb6f6b95022a73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7353129cb709af2186d760a00dbef3ae91dc7feeb6dab2149754accb95fae36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xqmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.154744 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kx5rc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6932c20-41d2-487b-90b4-1e3c96cb17fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52d0c40d9e59c7e62b7479b955967849178079a9bb2ac73ac987bd4eddc11183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kx5rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.174617 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91a38b2a-1669-414b-b2de-bd9f7d656db4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e496a7cea3a6ba7b8e4e60bd02f9cafc56cc4cf1049524b15749352b71b1551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40ca26c279d35c5369a97a8926c8c76b5773da64438b08a0fd721604f5cf1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51f23c6ba8596716e9d0b3001ed8d1494f186950ddb86e3187e98b5a9c5940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5e7ccdedeb2a9cc67c1eb6014f0a327f009ba
ace0decc678fd319c7f02e63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8be41d70648c6527439ed95843ec087f65506642438fab67094697952d0c69ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.189901 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c8f9bf62140eb4e54fc54bfef65710f511027fb29cc70e83d1f2516bb5b222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.201932 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec0d72cc9d4158b10455650c8b8f916e9c314616c0cf92b4e1f2b966fa650e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.210571 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.210623 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.210642 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.210664 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.210681 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:58Z","lastTransitionTime":"2026-01-27T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.216927 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.228187 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.247806 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.258825 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://372dffb4f29750115dda48ed80597b347e7021ac6dcb2c4161ee2f290f27024f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T06:46:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.274565 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7c8820-8f4a-4cb8-a949-cbfa4e8efea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://948d327df2d50b17a182fed3cb301b1396229844a41a5270ea33d7c5e9f95db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48948084ba0e80ec0b29617d1cec75a0d5e625e2343ee2d2cb3485bfbc526e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a6b5f1a21e3d2b93baf990f32b73b024a048bffefe4304ff435a2bd428b099b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc39fbcef1d044b8d23138e48425ad33e0650e222fb2131f45b59ca7d954820\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.289665 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3321234c0c20f923d830de9c484b2999c545617b190e9d71878ac27415be5e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:46:58Z is after 2025-08-24T17:21:41Z" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.313592 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.313628 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:58 crc 
kubenswrapper[4796]: I0127 06:46:58.313639 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.313719 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.313734 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:58Z","lastTransitionTime":"2026-01-27T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.417676 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.417729 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.417745 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.417768 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.417782 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:58Z","lastTransitionTime":"2026-01-27T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.521197 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.521263 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.521273 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.521293 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.521308 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:58Z","lastTransitionTime":"2026-01-27T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.625092 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.625168 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.625194 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.625224 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.625242 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:58Z","lastTransitionTime":"2026-01-27T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.690371 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 17:53:25.061481153 +0000 UTC Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.728599 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.728666 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.728690 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.728728 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.728752 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:58Z","lastTransitionTime":"2026-01-27T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.746315 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.746336 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:46:58 crc kubenswrapper[4796]: E0127 06:46:58.746525 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:46:58 crc kubenswrapper[4796]: E0127 06:46:58.746717 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.832209 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.832251 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.832263 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.832279 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.832291 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:58Z","lastTransitionTime":"2026-01-27T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.935064 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.935111 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.935123 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.935141 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:58 crc kubenswrapper[4796]: I0127 06:46:58.935152 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:58Z","lastTransitionTime":"2026-01-27T06:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.038176 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.038256 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.038277 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.038308 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.038333 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:59Z","lastTransitionTime":"2026-01-27T06:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.141263 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.141349 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.141381 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.141412 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.141434 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:59Z","lastTransitionTime":"2026-01-27T06:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.244674 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.244757 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.244781 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.244809 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.244832 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:59Z","lastTransitionTime":"2026-01-27T06:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.347884 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.347958 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.347982 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.348007 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.348028 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:59Z","lastTransitionTime":"2026-01-27T06:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.452217 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.452274 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.452292 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.452317 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.452336 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:59Z","lastTransitionTime":"2026-01-27T06:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.554785 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.554849 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.554872 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.554899 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.554920 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:59Z","lastTransitionTime":"2026-01-27T06:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.657830 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.657877 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.657889 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.657906 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.657917 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:59Z","lastTransitionTime":"2026-01-27T06:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.699906 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 08:09:21.574070166 +0000 UTC Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.746528 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:46:59 crc kubenswrapper[4796]: E0127 06:46:59.746663 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.760018 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.760048 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.760056 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.760070 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.760078 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:59Z","lastTransitionTime":"2026-01-27T06:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.862998 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.863054 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.863065 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.863078 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.863087 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:59Z","lastTransitionTime":"2026-01-27T06:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.965895 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.965970 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.965986 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.966418 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:46:59 crc kubenswrapper[4796]: I0127 06:46:59.966471 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:46:59Z","lastTransitionTime":"2026-01-27T06:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.069243 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.069306 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.069328 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.069360 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.069381 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:00Z","lastTransitionTime":"2026-01-27T06:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.172329 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.172364 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.172375 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.172387 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.172397 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:00Z","lastTransitionTime":"2026-01-27T06:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.275404 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.275453 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.275466 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.275484 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.275500 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:00Z","lastTransitionTime":"2026-01-27T06:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.379881 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.379939 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.379952 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.379974 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.379994 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:00Z","lastTransitionTime":"2026-01-27T06:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.483111 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.483182 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.483195 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.483217 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.483232 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:00Z","lastTransitionTime":"2026-01-27T06:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.586924 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.587001 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.587018 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.587047 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.587071 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:00Z","lastTransitionTime":"2026-01-27T06:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.690419 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.690505 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.690523 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.690594 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.690611 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:00Z","lastTransitionTime":"2026-01-27T06:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.701042 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 01:16:27.758984326 +0000 UTC Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.718328 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf"] Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.718890 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.723173 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.723430 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.748426 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:00 crc kubenswrapper[4796]: E0127 06:47:00.748646 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.749272 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:00 crc kubenswrapper[4796]: E0127 06:47:00.749397 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.788824 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7c8820-8f4a-4cb8-a949-cbfa4e8efea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://948d327df2d50b17a182fed3cb301b1396229844a41a5270ea33d7c5e9f95db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48948084ba0e80ec0b29617d1cec75a0d5e625e2343ee2d2cb3485bfbc526e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a6b5f1a21e3d2b93baf990f32b73b024a048bffefe4304ff435a2bd428b099b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc39fbcef1d044b8d23138e48425ad33e0650e222fb2131f45b59ca7d954820\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.793862 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.793923 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.793946 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.793978 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.794001 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:00Z","lastTransitionTime":"2026-01-27T06:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.810752 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3321234c0c20f923d830de9c484b2999c545617b190e9d71878ac27415be5e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.811032 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f76abf7f-03d4-496f-b7bd-1bc63e0425e6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gddlf\" (UID: \"f76abf7f-03d4-496f-b7bd-1bc63e0425e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.811139 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f76abf7f-03d4-496f-b7bd-1bc63e0425e6-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gddlf\" (UID: \"f76abf7f-03d4-496f-b7bd-1bc63e0425e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.811170 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f76abf7f-03d4-496f-b7bd-1bc63e0425e6-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gddlf\" (UID: \"f76abf7f-03d4-496f-b7bd-1bc63e0425e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.811408 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j57jq\" (UniqueName: \"kubernetes.io/projected/f76abf7f-03d4-496f-b7bd-1bc63e0425e6-kube-api-access-j57jq\") pod \"ovnkube-control-plane-749d76644c-gddlf\" (UID: \"f76abf7f-03d4-496f-b7bd-1bc63e0425e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.829689 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.852982 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46ql2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3555bc2-e335-4479-8b6f-8b5970b27a25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7ckp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46ql2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.875076 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1fb58d6-d9a4-4095-be46-a544216963f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa4ef31b3123acd6bde6c015b604904e010d9996f7ba318fa1a7ae674ea7cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88151a9fd7ac9365f1563c367588d8093a16206cfebfc1703f8d9cfceaee5134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bbcf36779d5d8ff4b06b0dba628a581f534c90af5774321a51bb6360c88db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e09fbd802f33aa6f2ebf9135fea9a98a1e8c0bb14440043fbada1b528b505ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e75d49621819f00db37b6f4471e5188a3eb781549bbae1422b6c2216a3366c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bafac
0d6ec40525fc7e61773ebfc410b7f7f42535491b72e0301ce93bd1a6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5056e562fc8402878999be6871b1600c19d06378c0e312b483fb6f6b95022a73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\
\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7353129cb709af2186d760a00dbef3ae91dc7feeb6dab2149754accb95fae36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xqmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.889129 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kx5rc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6932c20-41d2-487b-90b4-1e3c96cb17fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52d0c40d9e59c7e62b7479b955967849178079a9bb2ac73ac987bd4eddc11183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kx5rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.896736 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.896780 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.896792 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.896809 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.896818 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:00Z","lastTransitionTime":"2026-01-27T06:47:00Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.907656 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc2
76e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:46:45.928911 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.913053 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j57jq\" (UniqueName: \"kubernetes.io/projected/f76abf7f-03d4-496f-b7bd-1bc63e0425e6-kube-api-access-j57jq\") pod \"ovnkube-control-plane-749d76644c-gddlf\" (UID: \"f76abf7f-03d4-496f-b7bd-1bc63e0425e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.913158 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f76abf7f-03d4-496f-b7bd-1bc63e0425e6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gddlf\" (UID: \"f76abf7f-03d4-496f-b7bd-1bc63e0425e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.913214 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f76abf7f-03d4-496f-b7bd-1bc63e0425e6-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gddlf\" (UID: \"f76abf7f-03d4-496f-b7bd-1bc63e0425e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.913248 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f76abf7f-03d4-496f-b7bd-1bc63e0425e6-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gddlf\" (UID: \"f76abf7f-03d4-496f-b7bd-1bc63e0425e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.914779 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f76abf7f-03d4-496f-b7bd-1bc63e0425e6-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-gddlf\" (UID: \"f76abf7f-03d4-496f-b7bd-1bc63e0425e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.922912 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f76abf7f-03d4-496f-b7bd-1bc63e0425e6-env-overrides\") pod \"ovnkube-control-plane-749d76644c-gddlf\" (UID: \"f76abf7f-03d4-496f-b7bd-1bc63e0425e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.925571 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f76abf7f-03d4-496f-b7bd-1bc63e0425e6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-gddlf\" (UID: \"f76abf7f-03d4-496f-b7bd-1bc63e0425e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.930652 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.933150 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j57jq\" (UniqueName: \"kubernetes.io/projected/f76abf7f-03d4-496f-b7bd-1bc63e0425e6-kube-api-access-j57jq\") pod \"ovnkube-control-plane-749d76644c-gddlf\" (UID: \"f76abf7f-03d4-496f-b7bd-1bc63e0425e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.946156 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f76abf7f-03d4-496f-b7bd-1bc63e0425e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gddlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.966440 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91a38b2a-1669-414b-b2de-bd9f7d656db4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e496a7cea3a6ba7b8e4e60bd02f9cafc56cc4cf1049524b15749352b71b1551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40ca26c279d35c5369a97a8926c8c76b5773da64438b08a0fd721604f5cf1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51f23c6ba8596716e9d0b3001ed8d1494f186950ddb86e3187e98b5a9c5940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5e7ccdedeb2a9cc67c1eb6014f0a327f009ba
ace0decc678fd319c7f02e63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8be41d70648c6527439ed95843ec087f65506642438fab67094697952d0c69ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.982480 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c8f9bf62140eb4e54fc54bfef65710f511027fb29cc70e83d1f2516bb5b222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:00 crc kubenswrapper[4796]: I0127 06:47:00.998578 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.000138 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.000175 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.000188 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.000210 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.000225 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:01Z","lastTransitionTime":"2026-01-27T06:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.013707 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.031639 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.043013 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xqmc4_a1fb58d6-d9a4-4095-be46-a544216963f7/ovnkube-controller/0.log" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.047348 4796 generic.go:334] "Generic (PLEG): container finished" podID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerID="5056e562fc8402878999be6871b1600c19d06378c0e312b483fb6f6b95022a73" exitCode=1 Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.047412 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" event={"ID":"a1fb58d6-d9a4-4095-be46-a544216963f7","Type":"ContainerDied","Data":"5056e562fc8402878999be6871b1600c19d06378c0e312b483fb6f6b95022a73"} Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.048680 4796 scope.go:117] "RemoveContainer" containerID="5056e562fc8402878999be6871b1600c19d06378c0e312b483fb6f6b95022a73" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.051218 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://372dffb4f29750115dda48ed80597b347e7021ac6dcb2c4161ee2f290f27024f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.066674 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec0d72cc9d4158b10455650c8b8f916e9c314616c0cf92b4e1f2b966fa650e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.086479 4796 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.101302 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.102771 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.102874 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.102947 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.102966 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.102994 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.103015 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:01Z","lastTransitionTime":"2026-01-27T06:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.119938 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46ql2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3555bc2-e335-4479-8b6f-8b5970b27a25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\"
:\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7ckp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46ql2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.149019 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1fb58d6-d9a4-4095-be46-a544216963f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa4ef31b3123acd6bde6c015b604904e010d9996f7ba318fa1a7ae674ea7cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88151a9fd7ac9365f1563c367588d8093a16206cfebfc1703f8d9cfceaee5134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bbcf36779d5d8ff4b06b0dba628a581f534c90af5774321a51bb6360c88db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e09fbd802f33aa6f2ebf9135fea9a98a1e8c0bb14440043fbada1b528b505ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e75d49621819f00db37b6f4471e5188a3eb781549bbae1422b6c2216a3366c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bafac0d6ec40525fc7e61773ebfc410b7f7f42535491b72e0301ce93bd1a6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5056e562fc8402878999be6871b1600c19d06378
c0e312b483fb6f6b95022a73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5056e562fc8402878999be6871b1600c19d06378c0e312b483fb6f6b95022a73\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"message\\\":\\\"sions/factory.go:141\\\\nI0127 06:47:00.881895 6068 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:47:00.881909 6068 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:47:00.881916 6068 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:47:00.881936 6068 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 06:47:00.882681 6068 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 06:47:00.882714 6068 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 06:47:00.882739 6068 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 06:47:00.882759 6068 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 06:47:00.882779 6068 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 06:47:00.882802 6068 factory.go:656] Stopping watch factory\\\\nI0127 06:47:00.882823 6068 ovnkube.go:599] Stopped ovnkube\\\\nI0127 06:47:00.882879 6068 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 06:47:00.882895 6068 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 06:47:00.882904 6068 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 06:47:00.882913 6068 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 
06:47:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7353129cb709af2186d760a00dbef3ae91dc7feeb6dab2149754accb95fae36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xqmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.162705 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kx5rc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6932c20-41d2-487b-90b4-1e3c96cb17fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52d0c40d9e59c7e62b7479b955967849178079a9bb2ac73ac987bd4eddc11183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.16
8.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kx5rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.180956 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nF0127 06:46:45.928911 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.193100 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c8f9bf62140eb4e54fc54bfef65710f511027fb29cc70e83d1f2516bb5b222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.203903 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f76abf7f-03d4-496f-b7bd-1bc63e0425e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gddlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.207907 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.208071 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.208179 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.208311 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.208465 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:01Z","lastTransitionTime":"2026-01-27T06:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.222709 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91a38b2a-1669-414b-b2de-bd9f7d656db4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e496a7cea3a6ba7b8e4e60bd02f9cafc56cc4cf1049524b15749352b71b1551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40ca26c279d35c5369a97a8926c8c76b5773da64438b08a0fd721604f5cf1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51f23c6ba8596716e9d0b3001ed8d1494f186950ddb86e3187e98b5a9c5940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5e7ccdedeb2a9cc67c1eb6014f0a327f009baace0decc678fd319c7f02e63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8be41d70648c6527439ed95843ec087f65506642438fab67094697952d0c69ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.238659 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.254674 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.270519 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.284299 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://372dffb4f29750115dda48ed80597b347e7021ac6dcb2c4161ee2f290f27024f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.302167 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec0d72cc9d4158b10455650c8b8f916e9c314616c0cf92b4e1f2b966fa650e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.310751 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.310782 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.310792 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.310814 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.310824 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:01Z","lastTransitionTime":"2026-01-27T06:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.315816 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.315895 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.315926 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:01 crc kubenswrapper[4796]: E0127 06:47:01.316005 4796 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:47:01 crc kubenswrapper[4796]: E0127 06:47:01.316052 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:47:17.316036246 +0000 UTC m=+58.423003573 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:47:01 crc kubenswrapper[4796]: E0127 06:47:01.316283 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:47:17.316274392 +0000 UTC m=+58.423241719 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:47:01 crc kubenswrapper[4796]: E0127 06:47:01.316349 4796 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:47:01 crc kubenswrapper[4796]: E0127 06:47:01.316372 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:47:17.316366134 +0000 UTC m=+58.423333451 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.324203 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3321234c0c20f923d830de9c484b2999c545617b190e9d71878ac27415be5e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o:
//73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.345008 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7c8820-8f4a-4cb8-a949-cbfa4e8efea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://948d327df2d50b17a182fed3cb301b1396229844a41a5270ea33d7c5e9f95db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48948084ba0e80ec0b29617d1cec75a0d5e625e2343ee2d2cb3485bfbc526e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a6b5f1a21e3d2b93baf990f32b73b024a048bffefe4304ff435a2bd428b099b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc39fbcef1d044b8d23138e48425ad33e0650e222fb2131f45b59ca7d954820\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.362390 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7c8820-8f4a-4cb8-a949-cbfa4e8efea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://948d327df2d50b17a182fed3cb301b1396229844a41a5270ea33d7c5e9f95db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48948084ba0e80ec0b29617d1cec75a0d5e625e2343ee2d2cb3485bfbc526e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a6b5f1a21e3d2b93baf990f32b73b024a048bffefe4304ff435a2bd428b099b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc39fbcef1d044b8d23138e48425ad33e0650e222fb2131f45b59ca7d954820\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.379825 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3321234c0c20f923d830de9c484b2999c545617b190e9d71878ac27415be5e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51
656f6b8878fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.396181 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:46:45.928911 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.410199 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.413427 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.413485 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.413495 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.413509 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.413518 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:01Z","lastTransitionTime":"2026-01-27T06:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.417041 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.417099 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:01 crc kubenswrapper[4796]: E0127 06:47:01.417194 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:47:01 crc kubenswrapper[4796]: E0127 06:47:01.417208 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:47:01 crc kubenswrapper[4796]: E0127 06:47:01.417218 4796 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:47:01 crc kubenswrapper[4796]: E0127 06:47:01.417261 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 06:47:17.417248094 +0000 UTC m=+58.524215421 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:47:01 crc kubenswrapper[4796]: E0127 06:47:01.417261 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:47:01 crc kubenswrapper[4796]: E0127 06:47:01.417287 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:47:01 crc kubenswrapper[4796]: E0127 06:47:01.417298 4796 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:47:01 crc kubenswrapper[4796]: E0127 06:47:01.417362 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 06:47:17.417345686 +0000 UTC m=+58.524313013 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.424558 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.439382 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46ql2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3555bc2-e335-4479-8b6f-8b5970b27a25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7ckp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46ql2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.458470 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1fb58d6-d9a4-4095-be46-a544216963f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa4ef31b3123acd6bde6c015b604904e010d9996f7ba318fa1a7ae674ea7cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88151a9fd7ac9365f1563c367588d8093a16206cfebfc1703f8d9cfceaee5134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bbcf36779d5d8ff4b06b0dba628a581f534c90af5774321a51bb6360c88db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e09fbd802f33aa6f2ebf9135fea9a98a1e8c0bb14440043fbada1b528b505ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e75d49621819f00db37b6f4471e5188a3eb781549bbae1422b6c2216a3366c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bafac
0d6ec40525fc7e61773ebfc410b7f7f42535491b72e0301ce93bd1a6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5056e562fc8402878999be6871b1600c19d06378c0e312b483fb6f6b95022a73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5056e562fc8402878999be6871b1600c19d06378c0e312b483fb6f6b95022a73\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"message\\\":\\\"sions/factory.go:141\\\\nI0127 06:47:00.881895 6068 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:47:00.881909 6068 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:47:00.881916 6068 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:47:00.881936 6068 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 06:47:00.882681 6068 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 06:47:00.882714 6068 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 06:47:00.882739 6068 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 06:47:00.882759 6068 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 06:47:00.882779 6068 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 06:47:00.882802 6068 factory.go:656] Stopping watch factory\\\\nI0127 06:47:00.882823 6068 ovnkube.go:599] Stopped ovnkube\\\\nI0127 06:47:00.882879 6068 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 06:47:00.882895 6068 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 06:47:00.882904 6068 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 06:47:00.882913 6068 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 
06:47:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7353129cb709af2186d760a00dbef3ae91dc7feeb6dab2149754accb95fae36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xqmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.469005 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kx5rc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6932c20-41d2-487b-90b4-1e3c96cb17fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52d0c40d9e59c7e62b7479b955967849178079a9bb2ac73ac987bd4eddc11183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.16
8.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kx5rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.490406 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91a38b2a-1669-414b-b2de-bd9f7d656db4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e496a7cea3a6ba7b8e4e60bd02f9cafc56cc4cf1049524b15749352b71b1551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40ca26c279d35c5369a97a8926c8c76b5773da64438b08a0fd721604f5cf1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51f23c6ba8596716
e9d0b3001ed8d1494f186950ddb86e3187e98b5a9c5940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5e7ccdedeb2a9cc67c1eb6014f0a327f009baace0decc678fd319c7f02e63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8be41d70648c6527439ed95843ec087f65506642438fab67094697952d0c69ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.505427 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c8f9bf62140eb4e54fc54bfef65710f511027fb29cc70e83d1f2516bb5b222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.516977 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.517416 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.517520 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.517662 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.517785 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:01Z","lastTransitionTime":"2026-01-27T06:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.520010 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f76abf7f-03d4-496f-b7bd-1bc63e0425e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gddlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-27T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.533700 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.550200 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.566440 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.581323 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://372dffb4f29750115dda48ed80597b347e7021ac6dcb2c4161ee2f290f27024f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.596339 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec0d72cc9d4158b10455650c8b8f916e9c314616c0cf92b4e1f2b966fa650e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:01Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.620666 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.620718 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.620730 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.620748 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.620760 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:01Z","lastTransitionTime":"2026-01-27T06:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.701221 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 22:05:51.685753284 +0000 UTC Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.723911 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.723955 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.723970 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.723991 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.724004 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:01Z","lastTransitionTime":"2026-01-27T06:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.747172 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:01 crc kubenswrapper[4796]: E0127 06:47:01.749135 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.826677 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.826723 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.826734 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.826753 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.826766 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:01Z","lastTransitionTime":"2026-01-27T06:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.930708 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.931126 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.931222 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.931300 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:01 crc kubenswrapper[4796]: I0127 06:47:01.931363 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:01Z","lastTransitionTime":"2026-01-27T06:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.034846 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.035202 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.035383 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.035517 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.035754 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:02Z","lastTransitionTime":"2026-01-27T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.053312 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xqmc4_a1fb58d6-d9a4-4095-be46-a544216963f7/ovnkube-controller/0.log" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.056795 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" event={"ID":"a1fb58d6-d9a4-4095-be46-a544216963f7","Type":"ContainerStarted","Data":"b6cf83bea211b2b4299e90ab14f3afe21d0f1fc909cd5c1dad4a163574e382d3"} Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.056943 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.059890 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" event={"ID":"f76abf7f-03d4-496f-b7bd-1bc63e0425e6","Type":"ContainerStarted","Data":"c76f9100193fe1dc49d79019ff1953e3504677a9ca67ea5d80b6ec1d79337bdb"} Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.060215 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" event={"ID":"f76abf7f-03d4-496f-b7bd-1bc63e0425e6","Type":"ContainerStarted","Data":"c19e9f9a416c0e4e8723cfe5ccf410490e6599f0def64f7e8e1beab5d3ae8896"} Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.060404 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" event={"ID":"f76abf7f-03d4-496f-b7bd-1bc63e0425e6","Type":"ContainerStarted","Data":"cab41a009d4fd468437c94a6f7170b4437b48d28f158bea74212c5edcb572f75"} Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.080146 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.095747 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.112435 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.128997 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://372dffb4f29750115dda48ed80597b347e7021ac6dcb2c4161ee2f290f27024f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.138305 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.138332 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.138343 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.138360 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.138372 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:02Z","lastTransitionTime":"2026-01-27T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.144609 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec0d72cc9d4158b10455650c8b8f916e9c314616c0cf92b4e1f2b966fa650e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.161431 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7c8820-8f4a-4cb8-a949-cbfa4e8efea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://948d327df2d50b17a182fed3cb301b1396229844a41a5270ea33d7c5e9f95db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48948084ba0e80ec0b29617d1cec75a0d5e625e2343ee2d2cb3485bfbc526e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a6b5f1a21e3d2b93baf990f32b73b024a048bffefe4304ff435a2bd428b099b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc39fbcef1d044b8d23138e48425ad33e0650e222fb2131f45b59ca7d954820\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.177041 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3321234c0c20f923d830de9c484b2999c545617b190e9d71878ac27415be5e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.195230 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.208181 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46ql2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3555bc2-e335-4479-8b6f-8b5970b27a25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7ckp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46ql2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.244936 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1fb58d6-d9a4-4095-be46-a544216963f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa4ef31b3123acd6bde6c015b604904e010d9996f7ba318fa1a7ae674ea7cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88151a9fd7ac9365f1563c367588d8093a16206cfebfc1703f8d9cfceaee5134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bbcf36779d5d8ff4b06b0dba628a581f534c90af5774321a51bb6360c88db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e09fbd802f33aa6f2ebf9135fea9a98a1e8c0bb14440043fbada1b528b505ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e75d49621819f00db37b6f4471e5188a3eb781549bbae1422b6c2216a3366c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bafac
0d6ec40525fc7e61773ebfc410b7f7f42535491b72e0301ce93bd1a6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cf83bea211b2b4299e90ab14f3afe21d0f1fc909cd5c1dad4a163574e382d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5056e562fc8402878999be6871b1600c19d06378c0e312b483fb6f6b95022a73\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"message\\\":\\\"sions/factory.go:141\\\\nI0127 06:47:00.881895 6068 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:47:00.881909 6068 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:47:00.881916 6068 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:47:00.881936 6068 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 06:47:00.882681 6068 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 06:47:00.882714 6068 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 06:47:00.882739 6068 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 06:47:00.882759 6068 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 06:47:00.882779 6068 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 06:47:00.882802 6068 factory.go:656] Stopping watch factory\\\\nI0127 06:47:00.882823 6068 ovnkube.go:599] Stopped ovnkube\\\\nI0127 06:47:00.882879 6068 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 06:47:00.882895 6068 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 06:47:00.882904 6068 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 06:47:00.882913 6068 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 
06:47:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7353129cb709af2186d760a00dbef3ae91dc7feeb6dab2149754accb95fae36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xqmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.246691 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.246723 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.246733 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.246745 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.246754 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:02Z","lastTransitionTime":"2026-01-27T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.255333 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-gvx56"] Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.256043 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:47:02 crc kubenswrapper[4796]: E0127 06:47:02.256131 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.264492 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kx5rc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6932c20-41d2-487b-90b4-1e3c96cb17fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52d0c40d9e59c7e62b7479b955967849178079a9bb2ac73ac987bd4eddc11183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kx5rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.284258 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:46:45.928911 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.297168 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.307388 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f76abf7f-03d4-496f-b7bd-1bc63e0425e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gddlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.323991 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91a38b2a-1669-414b-b2de-bd9f7d656db4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e496a7cea3a6ba7b8e4e60bd02f9cafc56cc4cf1049524b15749352b71b1551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40ca26c279d35c5369a97a8926c8c76b5773da64438b08a0fd721604f5cf1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51f23c6ba8596716e9d0b3001ed8d1494f186950ddb86e3187e98b5a9c5940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5e7ccdedeb2a9cc67c1eb6014f0a327f009ba
ace0decc678fd319c7f02e63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8be41d70648c6527439ed95843ec087f65506642438fab67094697952d0c69ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.337025 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c8f9bf62140eb4e54fc54bfef65710f511027fb29cc70e83d1f2516bb5b222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.349813 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.349859 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.349869 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.349889 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.349902 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:02Z","lastTransitionTime":"2026-01-27T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.349920 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7c8820-8f4a-4cb8-a949-cbfa4e8efea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://948d327df2d50b17a182fed3cb301b1396229844a41a5270ea33d7c5e9f95db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48948084ba0e80ec0b29617d1cec75a0d5e625e2343ee2d2cb3485bfbc526e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
71aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a6b5f1a21e3d2b93baf990f32b73b024a048bffefe4304ff435a2bd428b099b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc39fbcef1d044b8d23138e48425ad33e0650e222fb2131f45b59ca7d954820\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.369880 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3321234c0c20f923d830de9c484b2999c545617b190e9d71878ac27415be5e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.413229 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:46:45.928911 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.427325 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjbl8\" (UniqueName: \"kubernetes.io/projected/bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09-kube-api-access-mjbl8\") pod \"network-metrics-daemon-gvx56\" (UID: \"bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09\") " pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.427378 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09-metrics-certs\") pod \"network-metrics-daemon-gvx56\" (UID: \"bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09\") " pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.428392 4796 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.443663 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.451888 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.451912 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.451920 4796 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.451934 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.451943 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:02Z","lastTransitionTime":"2026-01-27T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.458023 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46ql2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3555bc2-e335-4479-8b6f-8b5970b27a25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7ckp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46ql2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.469081 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.469115 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.469126 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.469137 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.469146 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:02Z","lastTransitionTime":"2026-01-27T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.479078 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1fb58d6-d9a4-4095-be46-a544216963f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa4ef31b3123acd6bde6c015b604904e010d9996f7ba318fa1a7ae674ea7cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88151a9fd7ac9365f1563c367588d8093a16206cfebfc1703f8d9cfceaee5134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://37bbcf36779d5d8ff4b06b0dba628a581f534c90af5774321a51bb6360c88db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e09fbd802f33aa6f2ebf9135fea9a98a1e8c0bb14440043fbada1b528b505ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e75d49621819f00db37b6f4471e5188a3eb781549bbae1422b6c2216a3366c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bafac0d6ec40525fc7e61773ebfc410b7f7f42535491b72e0301ce93bd1a6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cf83bea211b2b4299e90ab14f3afe21d0f1fc909cd5c1dad4a163574e382d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5056e562fc8402878999be6871b1600c19d06378c0e312b483fb6f6b95022a73\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"message\\\":\\\"sions/factory.go:141\\\\nI0127 06:47:00.881895 6068 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:47:00.881909 6068 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:47:00.881916 6068 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:47:00.881936 6068 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 06:47:00.882681 6068 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 06:47:00.882714 6068 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 06:47:00.882739 6068 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 06:47:00.882759 6068 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 06:47:00.882779 6068 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 06:47:00.882802 6068 factory.go:656] Stopping watch factory\\\\nI0127 06:47:00.882823 6068 ovnkube.go:599] Stopped ovnkube\\\\nI0127 06:47:00.882879 6068 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 06:47:00.882895 6068 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 06:47:00.882904 6068 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 06:47:00.882913 6068 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 
06:47:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7353129cb709af2186d760a00dbef3ae91dc7feeb6dab2149754accb95fae36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xqmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:02 crc kubenswrapper[4796]: E0127 06:47:02.482570 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab5d23f4-0a1a-4348-a4ed-cd82856490af\\\",\\\"systemUUID\\\":\\\"ea2a725c-47df-4291-8c97-fc5620e930c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.486208 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.486252 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.486267 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.486282 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.486291 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:02Z","lastTransitionTime":"2026-01-27T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.491923 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kx5rc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6932c20-41d2-487b-90b4-1e3c96cb17fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52d0c40d9e59c7e62b7479b955967849178079a9bb2ac73ac987bd4eddc11183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kx5rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:02 crc kubenswrapper[4796]: E0127 06:47:02.499359 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab5d23f4-0a1a-4348-a4ed-cd82856490af\\\",\\\"systemUUID\\\":\\\"ea2a725c-47df-4291-8c97-fc5620e930c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.502470 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.502496 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.502505 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.502520 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.502554 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:02Z","lastTransitionTime":"2026-01-27T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.506250 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gvx56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjbl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjbl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gvx56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:02 crc kubenswrapper[4796]: E0127 06:47:02.517471 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab5d23f4-0a1a-4348-a4ed-cd82856490af\\\",\\\"systemUUID\\\":\\\"ea2a725c-47df-4291-8c97-fc5620e930c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-27T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.526245 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.526298 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.526313 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.526331 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.526344 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:02Z","lastTransitionTime":"2026-01-27T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.528144 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjbl8\" (UniqueName: \"kubernetes.io/projected/bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09-kube-api-access-mjbl8\") pod \"network-metrics-daemon-gvx56\" (UID: \"bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09\") " pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.528263 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09-metrics-certs\") pod \"network-metrics-daemon-gvx56\" (UID: \"bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09\") " pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:47:02 crc kubenswrapper[4796]: E0127 06:47:02.528490 4796 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:47:02 crc kubenswrapper[4796]: E0127 06:47:02.528630 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09-metrics-certs podName:bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09 nodeName:}" failed. No retries permitted until 2026-01-27 06:47:03.028598461 +0000 UTC m=+44.135565788 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09-metrics-certs") pod "network-metrics-daemon-gvx56" (UID: "bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.529389 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91a38b2a-1669-414b-b2de-bd9f7d656db4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e496a7cea3a6ba7b8e4e60bd02f9cafc56cc4cf1049524b15749352b71b1551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40ca26c279d35c5369a97a8926c8c76b5773da64438b08a0fd721604f5cf1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51f23c6ba8596716e9d0b3001ed8d1494f186950ddb86e3187e98b5a9c5940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f5840
8f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5e7ccdedeb2a9cc67c1eb6014f0a327f009baace0decc678fd319c7f02e63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8be41d70648c6527439ed95843ec087f65506642438fab67094697952d0c69ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:02 crc kubenswrapper[4796]: E0127 06:47:02.540864 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab5d23f4-0a1a-4348-a4ed-cd82856490af\\\",\\\"systemUUID\\\":\\\"ea2a725c-47df-4291-8c97-fc5620e930c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-27T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.541945 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c8f9bf62140eb4e54fc54bfef65710f511027fb29cc70e83d1f2516bb5b222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.545434 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.545477 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.545488 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.545506 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.545516 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:02Z","lastTransitionTime":"2026-01-27T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.550447 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjbl8\" (UniqueName: \"kubernetes.io/projected/bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09-kube-api-access-mjbl8\") pod \"network-metrics-daemon-gvx56\" (UID: \"bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09\") " pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.557816 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f76abf7f-03d4-496f-b7bd-1bc63e0425e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gddlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:02 crc kubenswrapper[4796]: E0127 06:47:02.560331 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab5d23f4-0a1a-4348-a4ed-cd82856490af\\\",\\\"systemUUID\\\":\\\"ea2a725c-47df-4291-8c97-fc5620e930c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:02 crc kubenswrapper[4796]: E0127 06:47:02.560490 4796 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.562120 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.562260 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.562361 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.562467 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.562572 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:02Z","lastTransitionTime":"2026-01-27T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.576022 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.590191 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.603127 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.614890 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://372dffb4f29750115dda48ed80597b347e7021ac6dcb2c4161ee2f290f27024f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.629783 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec0d72cc9d4158b10455650c8b8f916e9c314616c0cf92b4e1f2b966fa650e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:02Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.664847 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.665124 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.665352 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.665561 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.665730 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:02Z","lastTransitionTime":"2026-01-27T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.701418 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 09:12:19.925961552 +0000 UTC Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.746761 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:02 crc kubenswrapper[4796]: E0127 06:47:02.747071 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.747374 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:02 crc kubenswrapper[4796]: E0127 06:47:02.747763 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.769543 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.769588 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.769602 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.769624 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.769637 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:02Z","lastTransitionTime":"2026-01-27T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.872629 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.872674 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.872687 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.872708 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.872721 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:02Z","lastTransitionTime":"2026-01-27T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.976101 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.976186 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.976204 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.976229 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:02 crc kubenswrapper[4796]: I0127 06:47:02.976269 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:02Z","lastTransitionTime":"2026-01-27T06:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.033053 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09-metrics-certs\") pod \"network-metrics-daemon-gvx56\" (UID: \"bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09\") " pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:47:03 crc kubenswrapper[4796]: E0127 06:47:03.033308 4796 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:47:03 crc kubenswrapper[4796]: E0127 06:47:03.033476 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09-metrics-certs podName:bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09 nodeName:}" failed. No retries permitted until 2026-01-27 06:47:04.033442 +0000 UTC m=+45.140409337 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09-metrics-certs") pod "network-metrics-daemon-gvx56" (UID: "bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.068171 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xqmc4_a1fb58d6-d9a4-4095-be46-a544216963f7/ovnkube-controller/1.log" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.069408 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xqmc4_a1fb58d6-d9a4-4095-be46-a544216963f7/ovnkube-controller/0.log" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.075162 4796 generic.go:334] "Generic (PLEG): container finished" podID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerID="b6cf83bea211b2b4299e90ab14f3afe21d0f1fc909cd5c1dad4a163574e382d3" exitCode=1 Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.075722 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" event={"ID":"a1fb58d6-d9a4-4095-be46-a544216963f7","Type":"ContainerDied","Data":"b6cf83bea211b2b4299e90ab14f3afe21d0f1fc909cd5c1dad4a163574e382d3"} Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.075928 4796 scope.go:117] "RemoveContainer" containerID="5056e562fc8402878999be6871b1600c19d06378c0e312b483fb6f6b95022a73" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.077401 4796 scope.go:117] "RemoveContainer" containerID="b6cf83bea211b2b4299e90ab14f3afe21d0f1fc909cd5c1dad4a163574e382d3" Jan 27 06:47:03 crc kubenswrapper[4796]: E0127 06:47:03.077687 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-xqmc4_openshift-ovn-kubernetes(a1fb58d6-d9a4-4095-be46-a544216963f7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.082695 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.082824 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.083104 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.083144 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.083164 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:03Z","lastTransitionTime":"2026-01-27T06:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.119019 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91a38b2a-1669-414b-b2de-bd9f7d656db4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e496a7cea3a6ba7b8e4e60bd02f9cafc56cc4cf1049524b15749352b71b1551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40ca26c279d35c5369a97a8926c8c76b5773da64438b08a0fd721604f5cf1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51f23c6ba8596716e9d0b3001ed8d1494f186950ddb86e3187e98b5a9c5940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5e7ccdedeb2a9cc67c1eb6014f0a327f009baace0decc678fd319c7f02e63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8be41d70648c6527439ed95843ec087f65506642438fab67094697952d0c69ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:03Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.140193 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c8f9bf62140eb4e54fc54bfef65710f511027fb29cc70e83d1f2516bb5b222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:03Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.157649 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f76abf7f-03d4-496f-b7bd-1bc63e0425e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19e9f9a416c0e4e8723cfe5ccf410490e6599f0def64f7e8e1beab5d3ae8896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76f9100193fe1dc49d79019ff1953e3504677a9ca67ea5d80b6ec1d79337bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gddlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:03Z is after 2025-08-24T17:21:41Z" Jan 27 
06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.172517 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec0d72cc9d4158b10455650c8b8f916e9c314616c0cf92b4e1f2b966fa650e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-27T06:47:03Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.186038 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.186379 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.186406 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.186432 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.186446 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:03Z","lastTransitionTime":"2026-01-27T06:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.192075 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:03Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.209491 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:03Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.225903 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:03Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.248258 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://372dffb4f29750115dda48ed80597b347e7021ac6dcb2c4161ee2f290f27024f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T06:47:03Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.265049 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7c8820-8f4a-4cb8-a949-cbfa4e8efea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://948d327df2d50b17a182fed3cb301b1396229844a41a5270ea33d7c5e9f95db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48948084ba0e80ec0b29617d1cec75a0d5e625e2343ee2d2cb3485bfbc526e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a6b5f1a21e3d2b93baf990f32b73b024a048bffefe4304ff435a2bd428b099b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc39fbcef1d044b8d23138e48425ad33e0650e222fb2131f45b59ca7d954820\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:03Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.284581 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3321234c0c20f923d830de9c484b2999c545617b190e9d71878ac27415be5e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:03Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.289822 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.289888 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:03 crc 
kubenswrapper[4796]: I0127 06:47:03.289914 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.289943 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.289967 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:03Z","lastTransitionTime":"2026-01-27T06:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.296280 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gvx56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjbl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjbl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gvx56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:03Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.312476 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:46:45.928911 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:03Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.328184 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:03Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.341022 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:03Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.364902 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46ql2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3555bc2-e335-4479-8b6f-8b5970b27a25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7ckp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46ql2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:03Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.388345 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1fb58d6-d9a4-4095-be46-a544216963f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa4ef31b3123acd6bde6c015b604904e010d9996f7ba318fa1a7ae674ea7cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88151a9fd7ac9365f1563c367588d8093a16206cfebfc1703f8d9cfceaee5134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bbcf36779d5d8ff4b06b0dba628a581f534c90af5774321a51bb6360c88db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e09fbd802f33aa6f2ebf9135fea9a98a1e8c0bb14440043fbada1b528b505ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e75d49621819f00db37b6f4471e5188a3eb781549bbae1422b6c2216a3366c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bafac
0d6ec40525fc7e61773ebfc410b7f7f42535491b72e0301ce93bd1a6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cf83bea211b2b4299e90ab14f3afe21d0f1fc909cd5c1dad4a163574e382d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5056e562fc8402878999be6871b1600c19d06378c0e312b483fb6f6b95022a73\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"message\\\":\\\"sions/factory.go:141\\\\nI0127 06:47:00.881895 6068 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:47:00.881909 6068 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:47:00.881916 6068 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:47:00.881936 6068 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 06:47:00.882681 6068 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 06:47:00.882714 6068 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 06:47:00.882739 6068 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 06:47:00.882759 6068 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 06:47:00.882779 6068 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 06:47:00.882802 6068 factory.go:656] Stopping watch factory\\\\nI0127 06:47:00.882823 6068 ovnkube.go:599] Stopped ovnkube\\\\nI0127 06:47:00.882879 6068 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 06:47:00.882895 6068 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 06:47:00.882904 6068 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 06:47:00.882913 6068 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 
06:47:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7353129cb709af2186d760a00dbef3ae91dc7feeb6dab2149754accb95fae36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xqmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:03Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.392061 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.392135 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.392147 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.392175 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.392189 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:03Z","lastTransitionTime":"2026-01-27T06:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.402473 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kx5rc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6932c20-41d2-487b-90b4-1e3c96cb17fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52d0c40d9e59c7e62b7479b955967849178079a9bb2ac73ac987bd4eddc11183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kx5rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:03Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.417216 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7c8820-8f4a-4cb8-a949-cbfa4e8efea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://948d327df2d50b17a182fed3cb301b1396229844a41a5270ea33d7c5e9f95db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48948084ba0e80ec0b29617d1cec75a0d5e625e2343ee2d2cb3485bfbc526e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a6b5f1a21e3d2b93baf990f32b73b024a048bffefe4304ff435a2bd428b099b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc39fbcef1d044b8d23138e48425ad33e0650e222fb2131f45b59ca7d954820\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:03Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.437931 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3321234c0c20f923d830de9c484b2999c545617b190e9d71878ac27415be5e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51
656f6b8878fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:03Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.458921 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\
\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:03Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.476719 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46ql2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3555bc2-e335-4479-8b6f-8b5970b27a25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var
/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7ckp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46ql2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:03Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.494589 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.494815 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.494911 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.495021 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.495111 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:03Z","lastTransitionTime":"2026-01-27T06:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.501450 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1fb58d6-d9a4-4095-be46-a544216963f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa4ef31b3123acd6bde6c015b604904e010d9996f7ba318fa1a7ae674ea7cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88151a9fd7ac9365f1563c367588d8093a16206cfebfc1703f8d9cfceaee5134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://37bbcf36779d5d8ff4b06b0dba628a581f534c90af5774321a51bb6360c88db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e09fbd802f33aa6f2ebf9135fea9a98a1e8c0bb14440043fbada1b528b505ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e75d49621819f00db37b6f4471e5188a3eb781549bbae1422b6c2216a3366c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bafac0d6ec40525fc7e61773ebfc410b7f7f42535491b72e0301ce93bd1a6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cf83bea211b2b4299e90ab14f3afe21d0f1fc909cd5c1dad4a163574e382d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5056e562fc8402878999be6871b1600c19d06378c0e312b483fb6f6b95022a73\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"message\\\":\\\"sions/factory.go:141\\\\nI0127 06:47:00.881895 6068 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:47:00.881909 6068 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:47:00.881916 6068 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:47:00.881936 6068 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 06:47:00.882681 6068 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 06:47:00.882714 6068 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 06:47:00.882739 6068 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 06:47:00.882759 6068 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 06:47:00.882779 6068 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 06:47:00.882802 6068 factory.go:656] Stopping watch factory\\\\nI0127 06:47:00.882823 6068 ovnkube.go:599] Stopped ovnkube\\\\nI0127 06:47:00.882879 6068 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 06:47:00.882895 6068 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 06:47:00.882904 6068 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 06:47:00.882913 6068 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 
06:47:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cf83bea211b2b4299e90ab14f3afe21d0f1fc909cd5c1dad4a163574e382d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 06:47:02.040279 6258 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:47:02.040323 6258 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 06:47:02.040597 6258 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 06:47:02.041398 6258 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 06:47:02.041998 6258 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 06:47:02.042037 6258 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 06:47:02.042074 6258 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 06:47:02.042098 6258 factory.go:656] Stopping watch factory\\\\nI0127 06:47:02.042115 6258 ovnkube.go:599] Stopped ovnkube\\\\nI0127 06:47:02.042133 6258 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 06:47:02.042158 6258 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7353129cb709af2186d760a00dbef3ae91dc7feeb6dab2149754accb95fae36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xqmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:03Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.518778 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kx5rc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6932c20-41d2-487b-90b4-1e3c96cb17fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52d0c40d9e59c7e62b7479b955967849178079a9bb2ac73ac987bd4eddc11183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kx5rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:03Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.533041 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gvx56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjbl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjbl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gvx56\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:03Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.551802 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1
ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:46:45.928911 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:03Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.567946 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:03Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.578818 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f76abf7f-03d4-496f-b7bd-1bc63e0425e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19e9f9a416c0e4e8723cfe5ccf410490e6599f0def64f7e8e1beab5d3ae8896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76f9100193fe1dc49d79019ff1953e3504677a9ca67ea5d80b6ec1d79337bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gddlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:03Z is after 2025-08-24T17:21:41Z" Jan 27 
06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.596974 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.597020 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.597031 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.597045 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.597054 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:03Z","lastTransitionTime":"2026-01-27T06:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.601354 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91a38b2a-1669-414b-b2de-bd9f7d656db4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e496a7cea3a6ba7b8e4e60bd02f9cafc56cc4cf1049524b15749352b71b1551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40ca26c279d35c5369a97a8926c8c76b5773da64438b08a0fd721604f5cf1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51f23c6ba8596716e9d0b3001ed8d1494f186950ddb86e3187e98b5a9c5940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5e7ccdedeb2a9cc67c1eb6014f0a327f009baace0decc678fd319c7f02e63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8be41d70648c6527439ed95843ec087f65506642438fab67094697952d0c69ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:03Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.614337 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c8f9bf62140eb4e54fc54bfef65710f511027fb29cc70e83d1f2516bb5b222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:03Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.625116 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:03Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.641202 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:03Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.660203 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:03Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.675265 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://372dffb4f29750115dda48ed80597b347e7021ac6dcb2c4161ee2f290f27024f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T06:47:03Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.694989 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec0d72cc9d4158b10455650c8b8f916e9c314616c0cf92b4e1f2b966fa650e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:03Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.699736 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.699801 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.699820 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.699840 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.699855 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:03Z","lastTransitionTime":"2026-01-27T06:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.702997 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 01:58:52.724043916 +0000 UTC Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.747027 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.747200 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:03 crc kubenswrapper[4796]: E0127 06:47:03.747295 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:47:03 crc kubenswrapper[4796]: E0127 06:47:03.747437 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.803462 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.803594 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.803624 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.803661 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.803684 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:03Z","lastTransitionTime":"2026-01-27T06:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.907464 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.907513 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.907553 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.907572 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:03 crc kubenswrapper[4796]: I0127 06:47:03.907585 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:03Z","lastTransitionTime":"2026-01-27T06:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.010387 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.010453 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.010468 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.010486 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.010499 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:04Z","lastTransitionTime":"2026-01-27T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.046687 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09-metrics-certs\") pod \"network-metrics-daemon-gvx56\" (UID: \"bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09\") " pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:47:04 crc kubenswrapper[4796]: E0127 06:47:04.046983 4796 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:47:04 crc kubenswrapper[4796]: E0127 06:47:04.047134 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09-metrics-certs podName:bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09 nodeName:}" failed. No retries permitted until 2026-01-27 06:47:06.047100219 +0000 UTC m=+47.154067646 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09-metrics-certs") pod "network-metrics-daemon-gvx56" (UID: "bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.082294 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xqmc4_a1fb58d6-d9a4-4095-be46-a544216963f7/ovnkube-controller/1.log" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.113419 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.113461 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.113473 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.113490 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.113502 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:04Z","lastTransitionTime":"2026-01-27T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.217013 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.217078 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.217095 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.217120 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.217143 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:04Z","lastTransitionTime":"2026-01-27T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.319789 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.319866 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.319891 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.319921 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.319943 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:04Z","lastTransitionTime":"2026-01-27T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.423405 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.423467 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.423490 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.423519 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.423589 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:04Z","lastTransitionTime":"2026-01-27T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.527100 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.527166 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.527185 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.527207 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.527224 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:04Z","lastTransitionTime":"2026-01-27T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.634696 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.634736 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.634747 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.634765 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.634777 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:04Z","lastTransitionTime":"2026-01-27T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.703854 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 09:49:24.565668164 +0000 UTC Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.737474 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.737524 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.737571 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.737593 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.737609 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:04Z","lastTransitionTime":"2026-01-27T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.747026 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.747137 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:04 crc kubenswrapper[4796]: E0127 06:47:04.747183 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:47:04 crc kubenswrapper[4796]: E0127 06:47:04.747270 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.840897 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.840960 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.840972 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.840992 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.841005 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:04Z","lastTransitionTime":"2026-01-27T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.944429 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.944485 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.944501 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.944525 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:04 crc kubenswrapper[4796]: I0127 06:47:04.944574 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:04Z","lastTransitionTime":"2026-01-27T06:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.047135 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.047166 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.047174 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.047188 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.047196 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:05Z","lastTransitionTime":"2026-01-27T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.150008 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.150065 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.150078 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.150102 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.150117 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:05Z","lastTransitionTime":"2026-01-27T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.253677 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.253725 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.253735 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.253750 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.253760 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:05Z","lastTransitionTime":"2026-01-27T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.356597 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.356674 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.356728 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.356760 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.356782 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:05Z","lastTransitionTime":"2026-01-27T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.459605 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.459654 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.459667 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.459688 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.459700 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:05Z","lastTransitionTime":"2026-01-27T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.561751 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.561812 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.561830 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.561856 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.561873 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:05Z","lastTransitionTime":"2026-01-27T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.664569 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.664948 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.665034 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.665111 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.665220 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:05Z","lastTransitionTime":"2026-01-27T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.704422 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 17:35:15.391011633 +0000 UTC Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.747102 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.747112 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:05 crc kubenswrapper[4796]: E0127 06:47:05.747324 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:47:05 crc kubenswrapper[4796]: E0127 06:47:05.747401 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.768919 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.769163 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.769230 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.769290 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.769357 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:05Z","lastTransitionTime":"2026-01-27T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.871875 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.871937 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.871954 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.871977 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.871994 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:05Z","lastTransitionTime":"2026-01-27T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.974408 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.974490 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.974519 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.974598 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:05 crc kubenswrapper[4796]: I0127 06:47:05.974623 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:05Z","lastTransitionTime":"2026-01-27T06:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.065139 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09-metrics-certs\") pod \"network-metrics-daemon-gvx56\" (UID: \"bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09\") " pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:47:06 crc kubenswrapper[4796]: E0127 06:47:06.065351 4796 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:47:06 crc kubenswrapper[4796]: E0127 06:47:06.065418 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09-metrics-certs podName:bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09 nodeName:}" failed. No retries permitted until 2026-01-27 06:47:10.065395608 +0000 UTC m=+51.172362975 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09-metrics-certs") pod "network-metrics-daemon-gvx56" (UID: "bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.076894 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.076966 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.076984 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.077012 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.077031 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:06Z","lastTransitionTime":"2026-01-27T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.180678 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.180750 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.180768 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.180795 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.180816 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:06Z","lastTransitionTime":"2026-01-27T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.283872 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.283945 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.283968 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.283998 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.284025 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:06Z","lastTransitionTime":"2026-01-27T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.386656 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.386699 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.386713 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.386733 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.386747 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:06Z","lastTransitionTime":"2026-01-27T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.490697 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.490859 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.490875 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.490900 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.490920 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:06Z","lastTransitionTime":"2026-01-27T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.595670 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.595747 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.595764 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.596325 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.596399 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:06Z","lastTransitionTime":"2026-01-27T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.699124 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.699206 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.699221 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.699267 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.699282 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:06Z","lastTransitionTime":"2026-01-27T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.705257 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 16:08:33.275375068 +0000 UTC Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.746668 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.746863 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:06 crc kubenswrapper[4796]: E0127 06:47:06.747015 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:47:06 crc kubenswrapper[4796]: E0127 06:47:06.747260 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.801812 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.801867 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.801880 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.801899 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.801914 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:06Z","lastTransitionTime":"2026-01-27T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.904019 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.904070 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.904086 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.904104 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:06 crc kubenswrapper[4796]: I0127 06:47:06.904119 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:06Z","lastTransitionTime":"2026-01-27T06:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.007162 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.007230 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.007243 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.007261 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.007274 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:07Z","lastTransitionTime":"2026-01-27T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.110906 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.110961 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.110974 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.110992 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.111006 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:07Z","lastTransitionTime":"2026-01-27T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.214689 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.214752 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.214772 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.214797 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.214817 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:07Z","lastTransitionTime":"2026-01-27T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.318169 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.318220 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.318233 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.318256 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.318267 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:07Z","lastTransitionTime":"2026-01-27T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.422023 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.422086 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.422112 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.422148 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.422169 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:07Z","lastTransitionTime":"2026-01-27T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.526579 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.527040 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.527213 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.527431 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.527640 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:07Z","lastTransitionTime":"2026-01-27T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.630741 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.630983 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.631069 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.631149 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.631220 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:07Z","lastTransitionTime":"2026-01-27T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.706119 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 00:41:25.80724156 +0000 UTC Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.735033 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.735080 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.735089 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.735110 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.735123 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:07Z","lastTransitionTime":"2026-01-27T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.746189 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:47:07 crc kubenswrapper[4796]: E0127 06:47:07.746381 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.746864 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:07 crc kubenswrapper[4796]: E0127 06:47:07.747092 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.837341 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.837417 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.837447 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.837478 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.837505 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:07Z","lastTransitionTime":"2026-01-27T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.939825 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.939891 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.939908 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.939936 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:07 crc kubenswrapper[4796]: I0127 06:47:07.939953 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:07Z","lastTransitionTime":"2026-01-27T06:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.043036 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.043100 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.043119 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.043144 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.043161 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:08Z","lastTransitionTime":"2026-01-27T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.146278 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.146349 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.146368 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.146392 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.146409 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:08Z","lastTransitionTime":"2026-01-27T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.249250 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.249328 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.249356 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.249388 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.249413 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:08Z","lastTransitionTime":"2026-01-27T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.352668 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.352737 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.352759 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.352787 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.352810 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:08Z","lastTransitionTime":"2026-01-27T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.456330 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.456424 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.456436 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.456461 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.456474 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:08Z","lastTransitionTime":"2026-01-27T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.558817 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.558861 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.558872 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.558886 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.558895 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:08Z","lastTransitionTime":"2026-01-27T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.662207 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.662264 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.662274 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.662296 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.662327 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:08Z","lastTransitionTime":"2026-01-27T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.707106 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 19:08:39.364760322 +0000 UTC Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.746929 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:08 crc kubenswrapper[4796]: E0127 06:47:08.747136 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.747725 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:08 crc kubenswrapper[4796]: E0127 06:47:08.747927 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.749206 4796 scope.go:117] "RemoveContainer" containerID="46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.765996 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.766073 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.766098 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.766130 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.766154 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:08Z","lastTransitionTime":"2026-01-27T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.869637 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.870086 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.870098 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.870115 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.870125 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:08Z","lastTransitionTime":"2026-01-27T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.973275 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.973320 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.973332 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.973351 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:08 crc kubenswrapper[4796]: I0127 06:47:08.973365 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:08Z","lastTransitionTime":"2026-01-27T06:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.076381 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.076458 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.076489 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.076516 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.076574 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:09Z","lastTransitionTime":"2026-01-27T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.106210 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.109105 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2231382e49c9132f7858a6b3c2ca5d90f3667f27cbbd7f785f7dac10eba7a78f"} Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.110054 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.132036 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7c8820-8f4a-4cb8-a949-cbfa4e8efea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://948d327df2d50b17a182fed3cb301b1396229844a41a5270ea33d7c5e9f95db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48948084ba0e80ec0b29617d1cec75a0d5e625e2343ee2d2cb3485bfbc526e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a6b5f1a21e3d2b9
3baf990f32b73b024a048bffefe4304ff435a2bd428b099b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc39fbcef1d044b8d23138e48425ad33e0650e222fb2131f45b59ca7d954820\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.154329 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3321234c0c20f923d830de9c484b2999c545617b190e9d71878ac27415be5e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.168167 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.179621 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.179675 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.179692 4796 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.179717 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.179732 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:09Z","lastTransitionTime":"2026-01-27T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.189201 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46ql2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3555bc2-e335-4479-8b6f-8b5970b27a25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7ckp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46ql2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.210998 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1fb58d6-d9a4-4095-be46-a544216963f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa4ef31b3123acd6bde6c015b604904e010d9996f7ba318fa1a7ae674ea7cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88151a9fd7ac9365f1563c367588d8093a16206cfebfc1703f8d9cfceaee5134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bbcf36779d5d8ff4b06b0dba628a581f534c90af5774321a51bb6360c88db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e09fbd802f33aa6f2ebf9135fea9a98a1e8c0bb14440043fbada1b528b505ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e75d49621819f00db37b6f4471e5188a3eb781549bbae1422b6c2216a3366c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bafac0d6ec40525fc7e61773ebfc410b7f7f42535491b72e0301ce93bd1a6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cf83bea211b2b4299e90ab14f3afe21d0f1fc9
09cd5c1dad4a163574e382d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5056e562fc8402878999be6871b1600c19d06378c0e312b483fb6f6b95022a73\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"message\\\":\\\"sions/factory.go:141\\\\nI0127 06:47:00.881895 6068 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:47:00.881909 6068 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:47:00.881916 6068 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:47:00.881936 6068 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 06:47:00.882681 6068 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 06:47:00.882714 6068 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 06:47:00.882739 6068 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 06:47:00.882759 6068 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 06:47:00.882779 6068 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 06:47:00.882802 6068 factory.go:656] Stopping watch factory\\\\nI0127 06:47:00.882823 6068 ovnkube.go:599] Stopped ovnkube\\\\nI0127 06:47:00.882879 6068 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 06:47:00.882895 6068 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 06:47:00.882904 6068 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 06:47:00.882913 6068 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 06:47:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cf83bea211b2b4299e90ab14f3afe21d0f1fc909cd5c1dad4a163574e382d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 06:47:02.040279 6258 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:47:02.040323 6258 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 06:47:02.040597 6258 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 06:47:02.041398 6258 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 06:47:02.041998 6258 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 06:47:02.042037 6258 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 06:47:02.042074 6258 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 06:47:02.042098 6258 factory.go:656] Stopping watch factory\\\\nI0127 06:47:02.042115 6258 ovnkube.go:599] Stopped ovnkube\\\\nI0127 
06:47:02.042133 6258 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 06:47:02.042158 6258 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7353129cb709af2186d760a00dbef3ae91dc7feeb6dab2149754accb95fae36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\
"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xqmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.225235 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kx5rc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6932c20-41d2-487b-90b4-1e3c96cb17fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52d0c40d9e59c7e62b7479b955967849178079a9bb2ac73ac987bd4eddc11183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kx5rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.236165 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gvx56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjbl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjbl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gvx56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.248682 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2231382e49c9132f7858a6b3c2ca5d90f3667f27cbbd7f785f7dac10eba7a78f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:46:45.928911 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.263418 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.274451 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f76abf7f-03d4-496f-b7bd-1bc63e0425e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19e9f9a416c0e4e8723cfe5ccf410490e6599f0def64f7e8e1beab5d3ae8896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76f9100193fe1dc49d79019ff1953e3504677a9ca67ea5d80b6ec1d79337bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gddlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:09Z is after 2025-08-24T17:21:41Z" Jan 27 
06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.282401 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.282429 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.282440 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.282458 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.282470 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:09Z","lastTransitionTime":"2026-01-27T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.296260 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91a38b2a-1669-414b-b2de-bd9f7d656db4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e496a7cea3a6ba7b8e4e60bd02f9cafc56cc4cf1049524b15749352b71b1551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40ca26c279d35c5369a97a8926c8c76b5773da64438b08a0fd721604f5cf1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51f23c6ba8596716e9d0b3001ed8d1494f186950ddb86e3187e98b5a9c5940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5e7ccdedeb2a9cc67c1eb6014f0a327f009baace0decc678fd319c7f02e63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8be41d70648c6527439ed95843ec087f65506642438fab67094697952d0c69ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.310233 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c8f9bf62140eb4e54fc54bfef65710f511027fb29cc70e83d1f2516bb5b222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.328296 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.344270 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.357595 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.371395 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://372dffb4f29750115dda48ed80597b347e7021ac6dcb2c4161ee2f290f27024f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T06:47:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.386399 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.386445 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.386457 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.386474 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.386486 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:09Z","lastTransitionTime":"2026-01-27T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.387778 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec0d72cc9d4158b10455650c8b8f916e9c314616c0cf92b4e1f2b966fa650e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:09Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.489147 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.489491 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.489639 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.489763 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.489892 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:09Z","lastTransitionTime":"2026-01-27T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.592403 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.592457 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.592468 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.592484 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.592495 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:09Z","lastTransitionTime":"2026-01-27T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.695332 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.695388 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.695407 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.695431 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.695490 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:09Z","lastTransitionTime":"2026-01-27T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.707894 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 03:51:46.505461697 +0000 UTC Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.746192 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:47:09 crc kubenswrapper[4796]: E0127 06:47:09.746371 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.746207 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:09 crc kubenswrapper[4796]: E0127 06:47:09.746997 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.798028 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.798058 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.798066 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.798078 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.798086 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:09Z","lastTransitionTime":"2026-01-27T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.901087 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.901166 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.901188 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.901213 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:09 crc kubenswrapper[4796]: I0127 06:47:09.901231 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:09Z","lastTransitionTime":"2026-01-27T06:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.003581 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.003618 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.003629 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.003645 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.003656 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:10Z","lastTransitionTime":"2026-01-27T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.106644 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.106709 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.106726 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.106749 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.106766 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:10Z","lastTransitionTime":"2026-01-27T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.109015 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09-metrics-certs\") pod \"network-metrics-daemon-gvx56\" (UID: \"bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09\") " pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:47:10 crc kubenswrapper[4796]: E0127 06:47:10.109221 4796 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:47:10 crc kubenswrapper[4796]: E0127 06:47:10.109307 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09-metrics-certs podName:bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09 nodeName:}" failed. No retries permitted until 2026-01-27 06:47:18.109280682 +0000 UTC m=+59.216248039 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09-metrics-certs") pod "network-metrics-daemon-gvx56" (UID: "bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.209787 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.209849 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.209866 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.209894 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.209914 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:10Z","lastTransitionTime":"2026-01-27T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.313019 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.313067 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.313076 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.313089 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.313099 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:10Z","lastTransitionTime":"2026-01-27T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.414923 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.414993 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.415015 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.415042 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.415060 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:10Z","lastTransitionTime":"2026-01-27T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.518196 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.518257 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.518273 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.518296 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.518313 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:10Z","lastTransitionTime":"2026-01-27T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.621572 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.621625 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.621637 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.621654 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.621675 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:10Z","lastTransitionTime":"2026-01-27T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.708944 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 13:48:09.378183078 +0000 UTC Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.724024 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.724061 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.724073 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.724088 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.724101 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:10Z","lastTransitionTime":"2026-01-27T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.746766 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:10 crc kubenswrapper[4796]: E0127 06:47:10.746889 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.746983 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:10 crc kubenswrapper[4796]: E0127 06:47:10.747180 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.767762 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91a38b2a-1669-414b-b2de-bd9f7d656db4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e496a7cea3a6ba7b8e4e60bd02f9cafc56cc4cf1049524b15749352b71b1551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40ca26c279d35c5369a97a8926c8c76b5773da64438b08a0fd721604f5cf1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51f23c6ba8596716e9d0b3001ed8d1494f186950ddb86e3187e98b5a9c5940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5e7ccdedeb2a9cc67c1eb6014f0a327f009baace0decc678fd319c7f02e63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8be41d70648c6527439ed95843ec087f65506642438fab67094697952d0c69ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bac
f153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.786600 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c8f9bf62140eb4e54fc54bfef65710f511027fb29cc70e83d1f2516bb5b222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.804468 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f76abf7f-03d4-496f-b7bd-1bc63e0425e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19e9f9a416c0e4e8723cfe5ccf410490e6599f0def64f7e8e1beab5d3ae8896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76f9100193fe1dc49d79019ff1953e3504677a9ca67ea5d80b6ec1d79337bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gddlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:10Z is after 2025-08-24T17:21:41Z" Jan 27 
06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.823258 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.827029 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.827135 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.827153 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.827175 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.827194 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:10Z","lastTransitionTime":"2026-01-27T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.839987 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://372dffb4f29750115dda48ed80597b347e7021ac6dcb2c4161ee2f290f27024f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.854107 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec0d72cc9d4158b10455650c8b8f916e9c314616c0cf92b4e1f2b966fa650e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.868854 4796 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.884197 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.900956 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7c8820-8f4a-4cb8-a949-cbfa4e8efea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://948d327df2d50b17a182fed3cb301b1396229844a41a5270ea33d7c5e9f95db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48948084ba0e80ec0b29617d1cec75a0d5e625e2343ee2d2cb3485bfbc526e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a6b5f1a21e3d2b93baf990f32b73b024a048bffefe4304ff435a2bd428b099b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc39fbcef1d044b8d23138e48425ad33e0650e222fb2131f45b59ca7d954820\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.920913 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3321234c0c20f923d830de9c484b2999c545617b190e9d71878ac27415be5e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51
656f6b8878fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.929827 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.929898 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.929921 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.930107 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.930286 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:10Z","lastTransitionTime":"2026-01-27T06:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.946571 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1fb58d6-d9a4-4095-be46-a544216963f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa4ef31b3123acd6bde6c015b604904e010d9996f7ba318fa1a7ae674ea7cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88151a9fd7ac9365f1563c367588d8093a16206cfebfc1703f8d9cfceaee5134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://37bbcf36779d5d8ff4b06b0dba628a581f534c90af5774321a51bb6360c88db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e09fbd802f33aa6f2ebf9135fea9a98a1e8c0bb14440043fbada1b528b505ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e75d49621819f00db37b6f4471e5188a3eb781549bbae1422b6c2216a3366c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bafac0d6ec40525fc7e61773ebfc410b7f7f42535491b72e0301ce93bd1a6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cf83bea211b2b4299e90ab14f3afe21d0f1fc909cd5c1dad4a163574e382d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5056e562fc8402878999be6871b1600c19d06378c0e312b483fb6f6b95022a73\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"message\\\":\\\"sions/factory.go:141\\\\nI0127 06:47:00.881895 6068 handler.go:208] Removed *v1.Node event handler 2\\\\nI0127 06:47:00.881909 6068 handler.go:208] Removed *v1.Node event handler 7\\\\nI0127 06:47:00.881916 6068 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:47:00.881936 6068 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0127 06:47:00.882681 6068 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 06:47:00.882714 6068 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 06:47:00.882739 6068 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 06:47:00.882759 6068 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0127 06:47:00.882779 6068 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0127 06:47:00.882802 6068 factory.go:656] Stopping watch factory\\\\nI0127 06:47:00.882823 6068 ovnkube.go:599] Stopped ovnkube\\\\nI0127 06:47:00.882879 6068 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 06:47:00.882895 6068 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 06:47:00.882904 6068 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0127 06:47:00.882913 6068 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 
06:47:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cf83bea211b2b4299e90ab14f3afe21d0f1fc909cd5c1dad4a163574e382d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 06:47:02.040279 6258 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:47:02.040323 6258 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 06:47:02.040597 6258 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 06:47:02.041398 6258 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 06:47:02.041998 6258 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 06:47:02.042037 6258 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 06:47:02.042074 6258 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 06:47:02.042098 6258 factory.go:656] Stopping watch factory\\\\nI0127 06:47:02.042115 6258 ovnkube.go:599] Stopped ovnkube\\\\nI0127 06:47:02.042133 6258 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 06:47:02.042158 6258 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7353129cb709af2186d760a00dbef3ae91dc7feeb6dab2149754accb95fae36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xqmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.960654 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kx5rc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6932c20-41d2-487b-90b4-1e3c96cb17fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52d0c40d9e59c7e62b7479b955967849178079a9bb2ac73ac987bd4eddc11183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kx5rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.973654 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gvx56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjbl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjbl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gvx56\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:10 crc kubenswrapper[4796]: I0127 06:47:10.987009 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1
ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2231382e49c9132f7858a6b3c2ca5d90f3667f27cbbd7f785f7dac10eba7a78f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:46:45.928911 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.001248 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:10Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.015807 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:11Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.033124 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.033172 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.033181 4796 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.033199 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.033212 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:11Z","lastTransitionTime":"2026-01-27T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.041355 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46ql2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3555bc2-e335-4479-8b6f-8b5970b27a25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7ckp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46ql2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:11Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.136482 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.136803 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.137032 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.137151 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.137264 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:11Z","lastTransitionTime":"2026-01-27T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.239942 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.240014 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.240033 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.240068 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.240089 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:11Z","lastTransitionTime":"2026-01-27T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.342508 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.342588 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.342604 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.342624 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.342636 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:11Z","lastTransitionTime":"2026-01-27T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.445402 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.445743 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.445860 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.445943 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.446016 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:11Z","lastTransitionTime":"2026-01-27T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.549289 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.549712 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.549910 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.550112 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.550270 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:11Z","lastTransitionTime":"2026-01-27T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.653502 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.653547 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.653555 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.653568 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.653577 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:11Z","lastTransitionTime":"2026-01-27T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.709500 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 07:35:57.815630121 +0000 UTC Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.746864 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.746981 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:11 crc kubenswrapper[4796]: E0127 06:47:11.747059 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:47:11 crc kubenswrapper[4796]: E0127 06:47:11.747158 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.756020 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.756090 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.756109 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.756138 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.756156 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:11Z","lastTransitionTime":"2026-01-27T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.859501 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.859601 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.859647 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.859705 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.859728 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:11Z","lastTransitionTime":"2026-01-27T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.962472 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.962642 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.962665 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.962690 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:11 crc kubenswrapper[4796]: I0127 06:47:11.962708 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:11Z","lastTransitionTime":"2026-01-27T06:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.065409 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.065456 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.065468 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.065487 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.065501 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:12Z","lastTransitionTime":"2026-01-27T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.168807 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.168911 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.168934 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.168971 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.168993 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:12Z","lastTransitionTime":"2026-01-27T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.271929 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.272000 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.272024 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.272057 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.272081 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:12Z","lastTransitionTime":"2026-01-27T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.375089 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.375146 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.375161 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.375183 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.375197 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:12Z","lastTransitionTime":"2026-01-27T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.477601 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.477685 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.477709 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.477735 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.477759 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:12Z","lastTransitionTime":"2026-01-27T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.580078 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.580133 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.580144 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.580157 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.580166 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:12Z","lastTransitionTime":"2026-01-27T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.682776 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.682856 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.682879 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.682908 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.682930 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:12Z","lastTransitionTime":"2026-01-27T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.710389 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 19:04:25.200324257 +0000 UTC Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.725001 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.725039 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.725050 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.725066 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.725077 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:12Z","lastTransitionTime":"2026-01-27T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:12 crc kubenswrapper[4796]: E0127 06:47:12.741280 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab5d23f4-0a1a-4348-a4ed-cd82856490af\\\",\\\"systemUUID\\\":\\\"ea2a725c-47df-4291-8c97-fc5620e930c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.745575 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.745633 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.745649 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.745672 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.745687 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:12Z","lastTransitionTime":"2026-01-27T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.746167 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.746181 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:12 crc kubenswrapper[4796]: E0127 06:47:12.746261 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:47:12 crc kubenswrapper[4796]: E0127 06:47:12.746327 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:47:12 crc kubenswrapper[4796]: E0127 06:47:12.766586 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab5d23f4-0a1a-4348-a4ed-cd82856490af\\\",\\\"systemUUID\\\":\\\"ea2a725c-47df-4291-8c97-fc5620e930c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.772498 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.772601 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.772627 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.772659 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.772683 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:12Z","lastTransitionTime":"2026-01-27T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:12 crc kubenswrapper[4796]: E0127 06:47:12.792020 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab5d23f4-0a1a-4348-a4ed-cd82856490af\\\",\\\"systemUUID\\\":\\\"ea2a725c-47df-4291-8c97-fc5620e930c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.795968 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.796004 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.796012 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.796038 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.796048 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:12Z","lastTransitionTime":"2026-01-27T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:12 crc kubenswrapper[4796]: E0127 06:47:12.805805 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab5d23f4-0a1a-4348-a4ed-cd82856490af\\\",\\\"systemUUID\\\":\\\"ea2a725c-47df-4291-8c97-fc5620e930c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.808641 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.808670 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.808681 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.808696 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.808708 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:12Z","lastTransitionTime":"2026-01-27T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:12 crc kubenswrapper[4796]: E0127 06:47:12.821776 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab5d23f4-0a1a-4348-a4ed-cd82856490af\\\",\\\"systemUUID\\\":\\\"ea2a725c-47df-4291-8c97-fc5620e930c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:12Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:12 crc kubenswrapper[4796]: E0127 06:47:12.821910 4796 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.823178 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.823204 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.823215 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.823230 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.823243 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:12Z","lastTransitionTime":"2026-01-27T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.926249 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.926585 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.926876 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.927142 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:12 crc kubenswrapper[4796]: I0127 06:47:12.927387 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:12Z","lastTransitionTime":"2026-01-27T06:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.030700 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.031159 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.031313 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.031510 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.031779 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:13Z","lastTransitionTime":"2026-01-27T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.134025 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.134077 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.134095 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.134117 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.134134 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:13Z","lastTransitionTime":"2026-01-27T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.237254 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.237322 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.237334 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.237352 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.237364 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:13Z","lastTransitionTime":"2026-01-27T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.341126 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.341221 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.341251 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.341287 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.341311 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:13Z","lastTransitionTime":"2026-01-27T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.443514 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.443575 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.443583 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.443597 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.443605 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:13Z","lastTransitionTime":"2026-01-27T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.546008 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.546054 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.546070 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.546093 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.546109 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:13Z","lastTransitionTime":"2026-01-27T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.648648 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.648723 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.648745 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.648773 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.648795 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:13Z","lastTransitionTime":"2026-01-27T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.711325 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 05:24:09.000127065 +0000 UTC Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.717737 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.719362 4796 scope.go:117] "RemoveContainer" containerID="b6cf83bea211b2b4299e90ab14f3afe21d0f1fc909cd5c1dad4a163574e382d3" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.737367 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:13Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.746412 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:47:13 crc kubenswrapper[4796]: E0127 06:47:13.746594 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.746723 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:13 crc kubenswrapper[4796]: E0127 06:47:13.746942 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.753148 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.753190 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.753201 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.753219 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.753233 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:13Z","lastTransitionTime":"2026-01-27T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.754758 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:13Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.771629 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46ql2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3555bc2-e335-4479-8b6f-8b5970b27a25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7ckp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46ql2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:13Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.797887 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1fb58d6-d9a4-4095-be46-a544216963f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa4ef31b3123acd6bde6c015b604904e010d9996f7ba318fa1a7ae674ea7cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88151a9fd7ac9365f1563c367588d8093a16206cfebfc1703f8d9cfceaee5134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bbcf36779d5d8ff4b06b0dba628a581f534c90af5774321a51bb6360c88db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e09fbd802f33aa6f2ebf9135fea9a98a1e8c0bb14440043fbada1b528b505ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e75d49621819f00db37b6f4471e5188a3eb781549bbae1422b6c2216a3366c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bafac
0d6ec40525fc7e61773ebfc410b7f7f42535491b72e0301ce93bd1a6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6cf83bea211b2b4299e90ab14f3afe21d0f1fc909cd5c1dad4a163574e382d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cf83bea211b2b4299e90ab14f3afe21d0f1fc909cd5c1dad4a163574e382d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 06:47:02.040279 6258 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:47:02.040323 6258 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 06:47:02.040597 6258 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 06:47:02.041398 6258 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 06:47:02.041998 6258 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 06:47:02.042037 6258 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 06:47:02.042074 6258 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 06:47:02.042098 6258 factory.go:656] Stopping watch factory\\\\nI0127 06:47:02.042115 6258 ovnkube.go:599] Stopped ovnkube\\\\nI0127 06:47:02.042133 6258 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 06:47:02.042158 6258 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-xqmc4_openshift-ovn-kubernetes(a1fb58d6-d9a4-4095-be46-a544216963f7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7353129cb709af2186d760a00dbef3ae91dc7feeb6dab2149754accb95fae36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xqmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:13Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.811221 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kx5rc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6932c20-41d2-487b-90b4-1e3c96cb17fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52d0c40d9e59c7e62b7479b955967849178079a9bb2ac73ac987bd4eddc11183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kx5rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:13Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.825295 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gvx56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjbl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjbl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gvx56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:13Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.844282 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2231382e49c9132f7858a6b3c2ca5d90f3667f27cbbd7f785f7dac10eba7a78f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:46:45.928911 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:13Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.855811 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.855883 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.855896 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.855912 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.855925 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:13Z","lastTransitionTime":"2026-01-27T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.856699 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c8f9bf62140eb4e54fc54bfef65710f511027fb29cc70e83d1f2516bb5b222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:13Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.871894 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f76abf7f-03d4-496f-b7bd-1bc63e0425e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19e9f9a416c0e4e8723cfe5ccf410490e6599f0def64f7e8e1beab5d3ae8896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76f9100193fe1dc49d79019ff1953e3504677a9ca67ea5d80b6ec1d79337bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gddlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:13Z is after 2025-08-24T17:21:41Z" Jan 27 
06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.896686 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91a38b2a-1669-414b-b2de-bd9f7d656db4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e496a7cea3a6ba7b8e4e60bd02f9cafc56cc4cf1049524b15749352b71b1551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40ca26c279d35c5369a97a8926c8c76b5773da64438b08a0fd721604f5cf1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51f23c6ba8596716e9d0b3001ed8d1494f186950ddb86e3187e98b5a9c5940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5e7ccdedeb2a9cc67c1eb6014f0a327f009baace0decc678fd319c7f02e63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8be41d70648c6527439ed95843ec087f65506642438fab67094697952d0c69ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:13Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.911248 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:13Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.925041 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:13Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.940648 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:13Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.950304 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://372dffb4f29750115dda48ed80597b347e7021ac6dcb2c4161ee2f290f27024f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T06:47:13Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.957778 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.957832 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.957844 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.957859 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.957868 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:13Z","lastTransitionTime":"2026-01-27T06:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.960964 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec0d72cc9d4158b10455650c8b8f916e9c314616c0cf92b4e1f2b966fa650e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:13Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.977919 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3321234c0c20f923d830de9c484b2999c545617b190e9d71878ac27415be5e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f
6b8878fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binar
y-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated
\\\":{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:13Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:13 crc kubenswrapper[4796]: I0127 06:47:13.991690 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7c8820-8f4a-4cb8-a949-cbfa4e8efea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://948d327df2d50b17a182fed3cb301b1396229844a41a5270ea33d7c5e9f95db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48948084ba0e80ec0b29617d1cec75a0d5e625e2343ee2d2cb3485bfbc526e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a6b5f1a21e3d2b93baf990f32b73b024a048bffefe4304ff435a2bd428b099b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc39fbcef1d044b8d23138e48425ad33e0650e222fb2131f45b59ca7d954820\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:13Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.060171 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.060204 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.060213 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.060228 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.060239 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:14Z","lastTransitionTime":"2026-01-27T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.130977 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xqmc4_a1fb58d6-d9a4-4095-be46-a544216963f7/ovnkube-controller/1.log" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.134316 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" event={"ID":"a1fb58d6-d9a4-4095-be46-a544216963f7","Type":"ContainerStarted","Data":"a5b39a7217daeda9dd1be33c1c724ff10d594e518288b1d92b411ec31b6f516f"} Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.134898 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.157779 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2231382e49c9132f7858a6b3c2ca5d90f3667f27cbbd7f785f7dac10eba7a78f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:46:45.928911 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.162883 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.162923 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.162936 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.162953 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.162966 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:14Z","lastTransitionTime":"2026-01-27T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.176427 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.197973 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.212706 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46ql2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3555bc2-e335-4479-8b6f-8b5970b27a25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7ckp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46ql2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.233915 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1fb58d6-d9a4-4095-be46-a544216963f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa4ef31b3123acd6bde6c015b604904e010d9996f7ba318fa1a7ae674ea7cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88151a9fd7ac9365f1563c367588d8093a16206cfebfc1703f8d9cfceaee5134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bbcf36779d5d8ff4b06b0dba628a581f534c90af5774321a51bb6360c88db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e09fbd802f33aa6f2ebf9135fea9a98a1e8c0bb14440043fbada1b528b505ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e75d49621819f00db37b6f4471e5188a3eb781549bbae1422b6c2216a3366c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bafac
0d6ec40525fc7e61773ebfc410b7f7f42535491b72e0301ce93bd1a6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b39a7217daeda9dd1be33c1c724ff10d594e518288b1d92b411ec31b6f516f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cf83bea211b2b4299e90ab14f3afe21d0f1fc909cd5c1dad4a163574e382d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 06:47:02.040279 6258 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:47:02.040323 6258 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 06:47:02.040597 6258 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 06:47:02.041398 6258 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 06:47:02.041998 6258 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 06:47:02.042037 6258 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 06:47:02.042074 6258 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 06:47:02.042098 6258 factory.go:656] Stopping watch factory\\\\nI0127 06:47:02.042115 6258 ovnkube.go:599] Stopped ovnkube\\\\nI0127 06:47:02.042133 6258 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 06:47:02.042158 6258 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7353129cb709af2186d760a00dbef3ae91dc7feeb6dab2149754accb95fae36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xqmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.245826 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kx5rc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6932c20-41d2-487b-90b4-1e3c96cb17fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52d0c40d9e59c7e62b7479b955967849178079a9bb2ac73ac987bd4eddc11183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kx5rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.257427 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gvx56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjbl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjbl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gvx56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.264942 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.264992 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.265007 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.265027 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.265044 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:14Z","lastTransitionTime":"2026-01-27T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.280967 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91a38b2a-1669-414b-b2de-bd9f7d656db4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e496a7cea3a6ba7b8e4e60bd02f9cafc56cc4cf1049524b15749352b71b1551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40ca26c279d35c5369a97a8926c8c76b5773da64438b08a0fd721604f5cf1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51f23c6ba8596716e9d0b3001ed8d1494f186950ddb86e3187e98b5a9c5940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5e7ccdedeb2a9cc67c1eb6014f0a327f009baace0decc678fd319c7f02e63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8be41d70648c6527439ed95843ec087f65506642438fab67094697952d0c69ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.292176 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c8f9bf62140eb4e54fc54bfef65710f511027fb29cc70e83d1f2516bb5b222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.304875 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f76abf7f-03d4-496f-b7bd-1bc63e0425e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19e9f9a416c0e4e8723cfe5ccf410490e6599f0def64f7e8e1beab5d3ae8896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76f9100193fe1dc49d79019ff1953e3504677a9ca67ea5d80b6ec1d79337bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gddlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:14Z is after 2025-08-24T17:21:41Z" Jan 27 
06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.321094 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.336752 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.349576 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.362334 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://372dffb4f29750115dda48ed80597b347e7021ac6dcb2c4161ee2f290f27024f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T06:47:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.366910 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.366946 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.366960 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.366975 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.366988 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:14Z","lastTransitionTime":"2026-01-27T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.372560 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec0d72cc9d4158b10455650c8b8f916e9c314616c0cf92b4e1f2b966fa650e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.386642 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7c8820-8f4a-4cb8-a949-cbfa4e8efea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://948d327df2d50b17a182fed3cb301b1396229844a41a5270ea33d7c5e9f95db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48948084ba0e80ec0b29617d1cec75a0d5e625e2343ee2d2cb3485bfbc526e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a6b5f1a21e3d2b93baf990f32b73b024a048bffefe4304ff435a2bd428b099b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc39fbcef1d044b8d23138e48425ad33e0650e222fb2131f45b59ca7d954820\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.412930 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3321234c0c20f923d830de9c484b2999c545617b190e9d71878ac27415be5e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:14Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.469343 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.469383 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:14 crc 
kubenswrapper[4796]: I0127 06:47:14.469392 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.469406 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.469415 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:14Z","lastTransitionTime":"2026-01-27T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.572089 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.572135 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.572146 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.572163 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.572173 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:14Z","lastTransitionTime":"2026-01-27T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.674765 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.674874 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.674890 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.674909 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.675303 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:14Z","lastTransitionTime":"2026-01-27T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.712566 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 15:20:56.156040954 +0000 UTC Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.746160 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:14 crc kubenswrapper[4796]: E0127 06:47:14.746325 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.746950 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:14 crc kubenswrapper[4796]: E0127 06:47:14.747195 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.779037 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.779088 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.779128 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.779173 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.779196 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:14Z","lastTransitionTime":"2026-01-27T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.880711 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.880748 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.880758 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.880773 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.880784 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:14Z","lastTransitionTime":"2026-01-27T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.983133 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.983179 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.983189 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.983204 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:14 crc kubenswrapper[4796]: I0127 06:47:14.983214 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:14Z","lastTransitionTime":"2026-01-27T06:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.085477 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.085524 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.085553 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.085572 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.085583 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:15Z","lastTransitionTime":"2026-01-27T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.147939 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xqmc4_a1fb58d6-d9a4-4095-be46-a544216963f7/ovnkube-controller/2.log" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.149130 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xqmc4_a1fb58d6-d9a4-4095-be46-a544216963f7/ovnkube-controller/1.log" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.156002 4796 generic.go:334] "Generic (PLEG): container finished" podID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerID="a5b39a7217daeda9dd1be33c1c724ff10d594e518288b1d92b411ec31b6f516f" exitCode=1 Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.156076 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" event={"ID":"a1fb58d6-d9a4-4095-be46-a544216963f7","Type":"ContainerDied","Data":"a5b39a7217daeda9dd1be33c1c724ff10d594e518288b1d92b411ec31b6f516f"} Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.156137 4796 scope.go:117] "RemoveContainer" containerID="b6cf83bea211b2b4299e90ab14f3afe21d0f1fc909cd5c1dad4a163574e382d3" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.157774 4796 scope.go:117] "RemoveContainer" containerID="a5b39a7217daeda9dd1be33c1c724ff10d594e518288b1d92b411ec31b6f516f" Jan 27 06:47:15 crc kubenswrapper[4796]: E0127 06:47:15.158283 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-xqmc4_openshift-ovn-kubernetes(a1fb58d6-d9a4-4095-be46-a544216963f7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.183656 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3321234c0c20f923d830de9c484b2999c545617b190e9d71878ac27415be5e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:15Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.189137 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.189199 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:15 crc 
kubenswrapper[4796]: I0127 06:47:15.189221 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.189248 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.189267 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:15Z","lastTransitionTime":"2026-01-27T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.207176 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7c8820-8f4a-4cb8-a949-cbfa4e8efea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://948d327df2d50b17a182fed3cb301b1396229844a41a5270ea33d7c5e9f95db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48948084ba0e80ec0b29617d1cec75a0d5e625e2343ee2d2cb3485bfbc526e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"con
tainerID\\\":\\\"cri-o://8a6b5f1a21e3d2b93baf990f32b73b024a048bffefe4304ff435a2bd428b099b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc39fbcef1d044b8d23138e48425ad33e0650e222fb2131f45b59ca7d954820\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:15Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.227522 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:15Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.242856 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:15Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.260010 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46ql2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3555bc2-e335-4479-8b6f-8b5970b27a25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7ckp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46ql2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:15Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.280470 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1fb58d6-d9a4-4095-be46-a544216963f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa4ef31b3123acd6bde6c015b604904e010d9996f7ba318fa1a7ae674ea7cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88151a9fd7ac9365f1563c367588d8093a16206cfebfc1703f8d9cfceaee5134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bbcf36779d5d8ff4b06b0dba628a581f534c90af5774321a51bb6360c88db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e09fbd802f33aa6f2ebf9135fea9a98a1e8c0bb14440043fbada1b528b505ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e75d49621819f00db37b6f4471e5188a3eb781549bbae1422b6c2216a3366c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bafac
0d6ec40525fc7e61773ebfc410b7f7f42535491b72e0301ce93bd1a6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b39a7217daeda9dd1be33c1c724ff10d594e518288b1d92b411ec31b6f516f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cf83bea211b2b4299e90ab14f3afe21d0f1fc909cd5c1dad4a163574e382d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 06:47:02.040279 6258 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:47:02.040323 6258 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 06:47:02.040597 6258 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 06:47:02.041398 6258 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 06:47:02.041998 6258 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 06:47:02.042037 6258 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 06:47:02.042074 6258 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 06:47:02.042098 6258 factory.go:656] Stopping watch factory\\\\nI0127 06:47:02.042115 6258 ovnkube.go:599] Stopped ovnkube\\\\nI0127 06:47:02.042133 6258 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 06:47:02.042158 6258 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5b39a7217daeda9dd1be33c1c724ff10d594e518288b1d92b411ec31b6f516f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:14Z\\\",\\\"message\\\":\\\"efault, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/certified-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"20da2226-531c-4179-9810-aa4026995ca3\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/certified-operators\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/certified-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/certified-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.214\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 06:47:14.602027 6475 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 06:47:14.602184 6475 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7353129cb709af2186d760a00dbef3ae91dc7feeb6dab2149754accb95fae36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xqmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:15Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.291705 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.291681 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kx5rc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6932c20-41d2-487b-90b4-1e3c96cb17fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52d0c40d9e59c7e62b7479b955967849178079a9bb2ac73ac987bd4eddc11183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kx5rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:15Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.291758 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.291768 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.291814 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.291823 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:15Z","lastTransitionTime":"2026-01-27T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.302017 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gvx56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjbl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjbl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gvx56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:15Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.314926 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2231382e49c9132f7858a6b3c2ca5d90f3667f27cbbd7f785f7dac10eba7a78f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:46:45.928911 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:15Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.325807 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c8f9bf62140eb4e54fc54bfef65710f511027fb29cc70e83d1f2516bb5b222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:15Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.336073 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f76abf7f-03d4-496f-b7bd-1bc63e0425e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19e9f9a416c0e4e8723cfe5ccf410490e6599f0def64f7e8e1beab5d3ae8896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76f9100193fe1dc49d79019ff1953e3504677a9ca67ea5d80b6ec1d79337bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gddlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:15Z is after 2025-08-24T17:21:41Z" Jan 27 
06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.359407 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91a38b2a-1669-414b-b2de-bd9f7d656db4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e496a7cea3a6ba7b8e4e60bd02f9cafc56cc4cf1049524b15749352b71b1551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40ca26c279d35c5369a97a8926c8c76b5773da64438b08a0fd721604f5cf1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51f23c6ba8596716e9d0b3001ed8d1494f186950ddb86e3187e98b5a9c5940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5e7ccdedeb2a9cc67c1eb6014f0a327f009baace0decc678fd319c7f02e63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8be41d70648c6527439ed95843ec087f65506642438fab67094697952d0c69ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:15Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.372707 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:15Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.388830 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:15Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.394024 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.394061 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.394070 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.394086 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.394096 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:15Z","lastTransitionTime":"2026-01-27T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.401421 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:15Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.412090 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://372dffb4f29750115dda48ed80597b347e7021ac6dcb2c4161ee2f290f27024f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:15Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.423330 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec0d72cc9d4158b10455650c8b8f916e9c314616c0cf92b4e1f2b966fa650e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:15Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.496588 4796 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.496664 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.496675 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.496691 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.496705 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:15Z","lastTransitionTime":"2026-01-27T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.600017 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.600099 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.600125 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.600154 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.600175 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:15Z","lastTransitionTime":"2026-01-27T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.702907 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.702945 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.702953 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.702967 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.702975 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:15Z","lastTransitionTime":"2026-01-27T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.713267 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 01:11:01.260261822 +0000 UTC Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.746856 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.746961 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:15 crc kubenswrapper[4796]: E0127 06:47:15.746988 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:47:15 crc kubenswrapper[4796]: E0127 06:47:15.747138 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.805864 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.805916 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.805930 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.805948 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.805963 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:15Z","lastTransitionTime":"2026-01-27T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.908308 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.908716 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.908732 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.908754 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.908769 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:15Z","lastTransitionTime":"2026-01-27T06:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.986677 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 27 06:47:15 crc kubenswrapper[4796]: I0127 06:47:15.995516 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.000397 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:15Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.010552 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.010589 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.010616 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.010631 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.010641 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:16Z","lastTransitionTime":"2026-01-27T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.011489 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://372dffb4f29750115dda48ed80597b347e7021ac6dcb2c4161ee2f290f27024f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.021635 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec0d72cc9d4158b10455650c8b8f916e9c314616c0cf92b4e1f2b966fa650e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.033371 4796 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.044498 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.057201 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7c8820-8f4a-4cb8-a949-cbfa4e8efea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://948d327df2d50b17a182fed3cb301b1396229844a41a5270ea33d7c5e9f95db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48948084ba0e80ec0b29617d1cec75a0d5e625e2343ee2d2cb3485bfbc526e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a6b5f1a21e3d2b93baf990f32b73b024a048bffefe4304ff435a2bd428b099b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc39fbcef1d044b8d23138e48425ad33e0650e222fb2131f45b59ca7d954820\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.069702 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3321234c0c20f923d830de9c484b2999c545617b190e9d71878ac27415be5e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51
656f6b8878fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.092319 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1fb58d6-d9a4-4095-be46-a544216963f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa4ef31b3123acd6bde6c015b604904e010d9996f7ba318fa1a7ae674ea7cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88151a9fd7ac9365f1563c367588d8093a16206cfebfc1703f8d9cfceaee5134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bbcf36779d5d8ff4b06b0dba628a581f534c90af5774321a51bb6360c88db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e09fbd802f33aa6f2ebf9135fea9a98a1e8c0bb14440043fbada1b528b505ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e75d49621819f00db37b6f4471e5188a3eb781549bbae1422b6c2216a3366c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bafac0d6ec40525fc7e61773ebfc410b7f7f42535491b72e0301ce93bd1a6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b39a7217daeda9dd1be33c1c724ff10d594e51
8288b1d92b411ec31b6f516f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6cf83bea211b2b4299e90ab14f3afe21d0f1fc909cd5c1dad4a163574e382d3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0127 06:47:02.040279 6258 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0127 06:47:02.040323 6258 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 06:47:02.040597 6258 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 06:47:02.041398 6258 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0127 06:47:02.041998 6258 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0127 06:47:02.042037 6258 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0127 06:47:02.042074 6258 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0127 06:47:02.042098 6258 factory.go:656] Stopping watch factory\\\\nI0127 06:47:02.042115 6258 ovnkube.go:599] Stopped ovnkube\\\\nI0127 06:47:02.042133 6258 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0127 06:47:02.042158 6258 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0127 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5b39a7217daeda9dd1be33c1c724ff10d594e518288b1d92b411ec31b6f516f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:14Z\\\",\\\"message\\\":\\\"efault, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/certified-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"20da2226-531c-4179-9810-aa4026995ca3\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/certified-operators\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/certified-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/certified-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, 
Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.214\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 06:47:14.602027 6475 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 06:47:14.602184 6475 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7353129cb709af2186d760a00dbef3ae91dc7feeb6dab2149754accb95fae36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuberne
tes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xqmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.104225 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kx5rc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6932c20-41d2-487b-90b4-1e3c96cb17fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52d0c40d9e59c7e62b7479b955967849178079a9bb2ac73ac987bd4eddc11183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kx5rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.113777 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.113826 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.113840 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.113857 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.113869 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:16Z","lastTransitionTime":"2026-01-27T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.120595 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gvx56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjbl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjbl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gvx56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.136915 4796 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2231382e49c9132f7858a6b3c2ca5d90f3667f27cbbd7f785f7dac10eba7a78f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:46:45.928911 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.155999 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.161405 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xqmc4_a1fb58d6-d9a4-4095-be46-a544216963f7/ovnkube-controller/2.log" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.165451 4796 scope.go:117] "RemoveContainer" containerID="a5b39a7217daeda9dd1be33c1c724ff10d594e518288b1d92b411ec31b6f516f" Jan 27 06:47:16 crc kubenswrapper[4796]: E0127 06:47:16.165636 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-xqmc4_openshift-ovn-kubernetes(a1fb58d6-d9a4-4095-be46-a544216963f7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.172354 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.184986 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46ql2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3555bc2-e335-4479-8b6f-8b5970b27a25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7ckp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46ql2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.206381 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91a38b2a-1669-414b-b2de-bd9f7d656db4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e496a7cea3a6ba7b8e4e60bd02f9cafc56cc4cf1049524b15749352b71b1551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40ca26c279d35c5369a97a8926c8c76b5773da64438b08a0fd721604f5cf1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51f23c6ba8596716e9d0b3001ed8d1494f186950ddb86e3187e98b5a9c5940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5e7ccdedeb2a9cc67c1eb6014f0a327f009baace0decc678fd319c7f02e63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8be41d70648c6527439ed95843ec087f65506642438fab67094697952d0c69ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.216462 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.216500 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.216513 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.216557 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.216575 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:16Z","lastTransitionTime":"2026-01-27T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.217711 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c8f9bf62140eb4e54fc54bfef65710f511027fb29cc70e83d1f2516bb5b222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.227296 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f76abf7f-03d4-496f-b7bd-1bc63e0425e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19e9f9a416c0e4e8723cfe5ccf410490e6599f0def64f7e8e1beab5d3ae8896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76f9100193fe1dc49d79019ff1953e3504677a9ca67ea5d80b6ec1d79337bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gddlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:16Z is after 2025-08-24T17:21:41Z" Jan 27 
06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.248104 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91a38b2a-1669-414b-b2de-bd9f7d656db4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e496a7cea3a6ba7b8e4e60bd02f9cafc56cc4cf1049524b15749352b71b1551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40ca26c279d35c5369a97a8926c8c76b5773da64438b08a0fd721604f5cf1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51f23c6ba8596716e9d0b3001ed8d1494f186950ddb86e3187e98b5a9c5940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5e7ccdedeb2a9cc67c1eb6014f0a327f009baace0decc678fd319c7f02e63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8be41d70648c6527439ed95843ec087f65506642438fab67094697952d0c69ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.259434 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c8f9bf62140eb4e54fc54bfef65710f511027fb29cc70e83d1f2516bb5b222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.269371 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f76abf7f-03d4-496f-b7bd-1bc63e0425e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19e9f9a416c0e4e8723cfe5ccf410490e6599f0def64f7e8e1beab5d3ae8896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76f9100193fe1dc49d79019ff1953e3504677a9ca67ea5d80b6ec1d79337bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gddlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:16Z is after 2025-08-24T17:21:41Z" Jan 27 
06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.278516 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5411bac6-98fc-4818-b8d8-719ac8a4e77d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4703bc30d7f8c3e6340ce7f818985b99667f3703b1399e8542341618f423ce05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6a258372b9862ec4e513ffb93a28fb346a67b05941486a4be0275b94cea2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef72b83b73d0b802d6dd4dfd087f388d7a1e01621dd126b13729831545461d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://859ce6407959a0931a232e38824568585b947803cb4437a0d0e5101c8b19a36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ce6407959a0931a232e38824568585b947803cb4437a0d0e5101c8b19a36b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.290219 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.300156 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.310947 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.319166 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.319225 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.319282 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.319315 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.319336 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:16Z","lastTransitionTime":"2026-01-27T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.321845 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://372dffb4f29750115dda48ed80597b347e7021ac6dcb2c4161ee2f290f27024f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.334778 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec0d72cc9d4158b10455650c8b8f916e9c314616c0cf92b4e1f2b966fa650e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.351527 4796 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7c8820-8f4a-4cb8-a949-cbfa4e8efea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://948d327df2d50b17a182fed3cb301b1396229844a41a5270ea33d7c5e9f95db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48948084ba0e80ec0b29617d1cec75a0d5e625e2343ee2d2cb3485bfbc526e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a6b5f1a21e3d2b93baf990f32b73b024a048bffefe4304ff435a2bd428b099b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc39fbcef1d044b8d23138e484
25ad33e0650e222fb2131f45b59ca7d954820\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.371344 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3321234c0c20f923d830de9c484b2999c545617b190e9d71878ac27415be5e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2026-01-27T06:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.390811 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2231382e49c9132f7858a6b3c2ca5d90f3667f27cbbd7f785f7dac10eba7a78f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:46:45.928911 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.407631 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.421805 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.422694 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.422753 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.422770 4796 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.422794 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.422810 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:16Z","lastTransitionTime":"2026-01-27T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.437407 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46ql2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3555bc2-e335-4479-8b6f-8b5970b27a25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7ckp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46ql2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.465362 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1fb58d6-d9a4-4095-be46-a544216963f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa4ef31b3123acd6bde6c015b604904e010d9996f7ba318fa1a7ae674ea7cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88151a9fd7ac9365f1563c367588d8093a16206cfebfc1703f8d9cfceaee5134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bbcf36779d5d8ff4b06b0dba628a581f534c90af5774321a51bb6360c88db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e09fbd802f33aa6f2ebf9135fea9a98a1e8c0bb14440043fbada1b528b505ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e75d49621819f00db37b6f4471e5188a3eb781549bbae1422b6c2216a3366c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bafac0d6ec40525fc7e61773ebfc410b7f7f42535491b72e0301ce93bd1a6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b39a7217daeda9dd1be33c1c724ff10d594e51
8288b1d92b411ec31b6f516f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5b39a7217daeda9dd1be33c1c724ff10d594e518288b1d92b411ec31b6f516f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:14Z\\\",\\\"message\\\":\\\"efault, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/certified-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"20da2226-531c-4179-9810-aa4026995ca3\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/certified-operators\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/certified-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/certified-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.214\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 06:47:14.602027 6475 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 06:47:14.602184 6475 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xqmc4_openshift-ovn-kubernetes(a1fb58d6-d9a4-4095-be46-a544216963f7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7353129cb709af2186d760a00dbef3ae91dc7feeb6dab2149754accb95fae36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xqmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.481243 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kx5rc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6932c20-41d2-487b-90b4-1e3c96cb17fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52d0c40d9e59c7e62b7479b955967849178079a9bb2ac73ac987bd4eddc11183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kx5rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.492748 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gvx56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjbl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjbl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:02Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-gvx56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:16Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.525863 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.525907 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.525919 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.525936 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.525949 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:16Z","lastTransitionTime":"2026-01-27T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.629235 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.629309 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.629326 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.629353 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.629372 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:16Z","lastTransitionTime":"2026-01-27T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.713626 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 10:03:59.408737649 +0000 UTC Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.732188 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.732450 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.732651 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.732831 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.732971 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:16Z","lastTransitionTime":"2026-01-27T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.746498 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.746577 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:16 crc kubenswrapper[4796]: E0127 06:47:16.746960 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:47:16 crc kubenswrapper[4796]: E0127 06:47:16.747080 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.836329 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.836427 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.836456 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.836488 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.836514 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:16Z","lastTransitionTime":"2026-01-27T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.939922 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.939975 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.939985 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.940002 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:16 crc kubenswrapper[4796]: I0127 06:47:16.940014 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:16Z","lastTransitionTime":"2026-01-27T06:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.042663 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.042718 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.042735 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.042757 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.042774 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:17Z","lastTransitionTime":"2026-01-27T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.145839 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.145870 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.145902 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.145918 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.145927 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:17Z","lastTransitionTime":"2026-01-27T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.251116 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.251168 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.251178 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.251194 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.251205 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:17Z","lastTransitionTime":"2026-01-27T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.354033 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.354068 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.354079 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.354093 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.354104 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:17Z","lastTransitionTime":"2026-01-27T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.393831 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.393967 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:17 crc kubenswrapper[4796]: E0127 06:47:17.394004 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:47:49.393984133 +0000 UTC m=+90.500951470 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:47:17 crc kubenswrapper[4796]: E0127 06:47:17.394046 4796 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.394062 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:17 crc kubenswrapper[4796]: E0127 06:47:17.394082 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:47:49.394072555 +0000 UTC m=+90.501039882 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:47:17 crc kubenswrapper[4796]: E0127 06:47:17.394144 4796 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:47:17 crc kubenswrapper[4796]: E0127 06:47:17.394178 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:47:49.394168487 +0000 UTC m=+90.501135814 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.456980 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.457043 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.457064 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.457090 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.457107 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:17Z","lastTransitionTime":"2026-01-27T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.495316 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.495462 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:17 crc kubenswrapper[4796]: E0127 06:47:17.495510 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:47:17 crc kubenswrapper[4796]: E0127 06:47:17.495581 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:47:17 crc kubenswrapper[4796]: E0127 06:47:17.495601 4796 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:47:17 crc kubenswrapper[4796]: E0127 06:47:17.495686 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-27 06:47:49.495660652 +0000 UTC m=+90.602628019 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:47:17 crc kubenswrapper[4796]: E0127 06:47:17.495782 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:47:17 crc kubenswrapper[4796]: E0127 06:47:17.495836 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:47:17 crc kubenswrapper[4796]: E0127 06:47:17.495863 4796 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:47:17 crc kubenswrapper[4796]: E0127 06:47:17.495975 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-27 06:47:49.49594088 +0000 UTC m=+90.602908247 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.560818 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.560916 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.560926 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.560960 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.560975 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:17Z","lastTransitionTime":"2026-01-27T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.663712 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.663806 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.663848 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.663893 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.663917 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:17Z","lastTransitionTime":"2026-01-27T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.713804 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 05:23:14.405624002 +0000 UTC Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.746306 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.746349 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:17 crc kubenswrapper[4796]: E0127 06:47:17.746464 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:47:17 crc kubenswrapper[4796]: E0127 06:47:17.746578 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.765753 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.765798 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.765810 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.765827 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.765838 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:17Z","lastTransitionTime":"2026-01-27T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.868866 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.868922 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.868943 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.868972 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.868995 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:17Z","lastTransitionTime":"2026-01-27T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.971581 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.971619 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.971628 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.971644 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:17 crc kubenswrapper[4796]: I0127 06:47:17.971656 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:17Z","lastTransitionTime":"2026-01-27T06:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.074120 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.074182 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.074206 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.074234 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.074252 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:18Z","lastTransitionTime":"2026-01-27T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.177093 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.177133 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.177143 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.177159 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.177172 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:18Z","lastTransitionTime":"2026-01-27T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.205695 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09-metrics-certs\") pod \"network-metrics-daemon-gvx56\" (UID: \"bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09\") " pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:47:18 crc kubenswrapper[4796]: E0127 06:47:18.205916 4796 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:47:18 crc kubenswrapper[4796]: E0127 06:47:18.206008 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09-metrics-certs podName:bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09 nodeName:}" failed. No retries permitted until 2026-01-27 06:47:34.205984835 +0000 UTC m=+75.312952162 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09-metrics-certs") pod "network-metrics-daemon-gvx56" (UID: "bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.280274 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.280318 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.280327 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.280362 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.280376 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:18Z","lastTransitionTime":"2026-01-27T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.382978 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.383028 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.383038 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.383057 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.383070 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:18Z","lastTransitionTime":"2026-01-27T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.486719 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.486774 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.486800 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.486830 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.486853 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:18Z","lastTransitionTime":"2026-01-27T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.592194 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.592253 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.592264 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.592288 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.592304 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:18Z","lastTransitionTime":"2026-01-27T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.695941 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.696000 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.696012 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.696039 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.696054 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:18Z","lastTransitionTime":"2026-01-27T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.714429 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 14:02:51.900107107 +0000 UTC Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.747063 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.747131 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:18 crc kubenswrapper[4796]: E0127 06:47:18.747307 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:47:18 crc kubenswrapper[4796]: E0127 06:47:18.747554 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.800201 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.800373 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.800390 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.800445 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.800461 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:18Z","lastTransitionTime":"2026-01-27T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.903755 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.903791 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.903801 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.903815 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:18 crc kubenswrapper[4796]: I0127 06:47:18.903824 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:18Z","lastTransitionTime":"2026-01-27T06:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.007240 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.007289 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.007302 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.007322 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.007337 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:19Z","lastTransitionTime":"2026-01-27T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.110092 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.110157 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.110171 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.110192 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.110205 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:19Z","lastTransitionTime":"2026-01-27T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.215818 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.215875 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.215886 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.215907 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.215921 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:19Z","lastTransitionTime":"2026-01-27T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.319773 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.319831 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.319853 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.319878 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.319894 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:19Z","lastTransitionTime":"2026-01-27T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.421757 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.421804 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.421815 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.421831 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.421843 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:19Z","lastTransitionTime":"2026-01-27T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.524198 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.524270 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.524282 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.524304 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.524315 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:19Z","lastTransitionTime":"2026-01-27T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.627041 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.627091 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.627103 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.627120 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.627133 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:19Z","lastTransitionTime":"2026-01-27T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.715600 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 14:38:53.808204105 +0000 UTC Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.729831 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.729881 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.729897 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.729918 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.729934 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:19Z","lastTransitionTime":"2026-01-27T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.746863 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:47:19 crc kubenswrapper[4796]: E0127 06:47:19.747052 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.746887 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:19 crc kubenswrapper[4796]: E0127 06:47:19.747440 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.832393 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.832420 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.832427 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.832440 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.832448 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:19Z","lastTransitionTime":"2026-01-27T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.935166 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.935212 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.935223 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.935237 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:19 crc kubenswrapper[4796]: I0127 06:47:19.935248 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:19Z","lastTransitionTime":"2026-01-27T06:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.037985 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.038022 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.038032 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.038048 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.038059 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:20Z","lastTransitionTime":"2026-01-27T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.140749 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.140803 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.140815 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.140831 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.140841 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:20Z","lastTransitionTime":"2026-01-27T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.243354 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.243418 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.243440 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.243461 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.243474 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:20Z","lastTransitionTime":"2026-01-27T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.345323 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.345356 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.345368 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.345382 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.345392 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:20Z","lastTransitionTime":"2026-01-27T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.447841 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.447886 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.447896 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.447909 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.447919 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:20Z","lastTransitionTime":"2026-01-27T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.550662 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.550710 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.550722 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.550738 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.550750 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:20Z","lastTransitionTime":"2026-01-27T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.654150 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.654185 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.654194 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.654208 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.654217 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:20Z","lastTransitionTime":"2026-01-27T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.716576 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 04:59:42.840261586 +0000 UTC Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.746173 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.746281 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:20 crc kubenswrapper[4796]: E0127 06:47:20.746547 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:47:20 crc kubenswrapper[4796]: E0127 06:47:20.746680 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.757823 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.757883 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.757894 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.757917 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.757936 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:20Z","lastTransitionTime":"2026-01-27T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.772129 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91a38b2a-1669-414b-b2de-bd9f7d656db4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e496a7cea3a6ba7b8e4e60bd02f9cafc56cc4cf1049524b15749352b71b1551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40ca26c279d35c5369a97a8926c8c76b5773da64438b08a0fd721604f5cf1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a
67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51f23c6ba8596716e9d0b3001ed8d1494f186950ddb86e3187e98b5a9c5940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5e7ccdedeb2a9cc67c1eb6014f0a327f009baace0decc678fd319c7f02e63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8be41d70648c6527439ed95843ec087f65506642438fab67094697952d0c69ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:20Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.787001 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c8f9bf62140eb4e54fc54bfef65710f511027fb29cc70e83d1f2516bb5b222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:20Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.799791 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f76abf7f-03d4-496f-b7bd-1bc63e0425e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19e9f9a416c0e4e8723cfe5ccf410490e6599f0def64f7e8e1beab5d3ae8896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76f9100193fe1dc49d79019ff1953e3504677a9ca67ea5d80b6ec1d79337bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gddlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:20Z is after 2025-08-24T17:21:41Z" Jan 27 
06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.810769 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://372dffb4f29750115dda48ed80597b347e7021ac6dcb2c4161ee2f290f27024f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:20Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.822515 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec0d72cc9d4158b10455650c8b8f916e9c314616c0cf92b4e1f2b966fa650e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:20Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.836336 4796 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5411bac6-98fc-4818-b8d8-719ac8a4e77d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4703bc30d7f8c3e6340ce7f818985b99667f3703b1399e8542341618f423ce05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6a258372b9862ec4e513ffb93a28fb346a67b05941486a4be0275b94cea2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef72b83b73d0b802d6dd4dfd087f388d7a1e01621dd126b13729831545461d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://859ce6407959a0931a232e38824568585b947803cb4437a0d0e5101c8b19a36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ce6407959a0931a232e38824568585b947803cb4437a0d0e5101c8b19a36b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:20Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.848259 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:20Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.860868 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:20Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.861106 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.861137 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.861146 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.861166 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.861178 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:20Z","lastTransitionTime":"2026-01-27T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.871905 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:20Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.885205 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7c8820-8f4a-4cb8-a949-cbfa4e8efea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://948d327df2d50b17a182fed3cb301b1396229844a41a5270ea33d7c5e9f95db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48948084ba0e80ec0b29617d1cec75a0d5e625e2343ee2d2cb3485bfbc526e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a6b5f1a21e3d2b93baf990f32b73b024a048bffefe4304ff435a2bd428b099b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc39fbcef1d044b8d23138e48425ad33e0650e222fb2131f45b59ca7d954820\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:20Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.902687 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3321234c0c20f923d830de9c484b2999c545617b190e9d71878ac27415be5e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51
656f6b8878fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:20Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.915727 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kx5rc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6932c20-41d2-487b-90b4-1e3c96cb17fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52d0c40d9e59c7e62b7479b955967849178079a9bb2ac73ac987bd4eddc11183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:49Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-kx5rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:20Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.928261 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gvx56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjbl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjbl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gvx56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-01-27T06:47:20Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.945468 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syn
cer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2231382e49c9132f7858a6b3c2ca5d90f3667f27cbbd7f785f7dac10eba7a78f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:46:45.928911 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:20Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.962613 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:20Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.963153 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.963188 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.963199 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.963215 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.963225 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:20Z","lastTransitionTime":"2026-01-27T06:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.975520 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:20Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:20 crc kubenswrapper[4796]: I0127 06:47:20.989012 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46ql2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3555bc2-e335-4479-8b6f-8b5970b27a25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7ckp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46ql2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:20Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.008140 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1fb58d6-d9a4-4095-be46-a544216963f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa4ef31b3123acd6bde6c015b604904e010d9996f7ba318fa1a7ae674ea7cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88151a9fd7ac9365f1563c367588d8093a16206cfebfc1703f8d9cfceaee5134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bbcf36779d5d8ff4b06b0dba628a581f534c90af5774321a51bb6360c88db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e09fbd802f33aa6f2ebf9135fea9a98a1e8c0bb14440043fbada1b528b505ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e75d49621819f00db37b6f4471e5188a3eb781549bbae1422b6c2216a3366c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bafac
0d6ec40525fc7e61773ebfc410b7f7f42535491b72e0301ce93bd1a6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b39a7217daeda9dd1be33c1c724ff10d594e518288b1d92b411ec31b6f516f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5b39a7217daeda9dd1be33c1c724ff10d594e518288b1d92b411ec31b6f516f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:14Z\\\",\\\"message\\\":\\\"efault, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/certified-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"20da2226-531c-4179-9810-aa4026995ca3\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/certified-operators\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/certified-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/certified-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.214\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 
06:47:14.602027 6475 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 06:47:14.602184 6475 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-xqmc4_openshift-ovn-kubernetes(a1fb58d6-d9a4-4095-be46-a544216963f7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7353129cb709af2186d760a00dbef3ae91dc7feeb6dab2149754accb95fae36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xqmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:21Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.066705 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.066744 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.066755 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.066770 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.066781 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:21Z","lastTransitionTime":"2026-01-27T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.170039 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.170091 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.170111 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.170135 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.170152 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:21Z","lastTransitionTime":"2026-01-27T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.271714 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.271767 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.271778 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.271793 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.271803 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:21Z","lastTransitionTime":"2026-01-27T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.373664 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.373759 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.373769 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.373783 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.373793 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:21Z","lastTransitionTime":"2026-01-27T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.477297 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.477650 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.477787 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.477882 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.477975 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:21Z","lastTransitionTime":"2026-01-27T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.580795 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.580834 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.580846 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.580859 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.580868 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:21Z","lastTransitionTime":"2026-01-27T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.683463 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.683517 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.683564 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.683587 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.683605 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:21Z","lastTransitionTime":"2026-01-27T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.717706 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 12:07:47.812136106 +0000 UTC Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.746802 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.746951 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:21 crc kubenswrapper[4796]: E0127 06:47:21.747089 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:47:21 crc kubenswrapper[4796]: E0127 06:47:21.747319 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.786627 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.786686 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.786709 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.786734 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.786753 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:21Z","lastTransitionTime":"2026-01-27T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.889328 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.889385 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.889396 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.889418 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.889433 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:21Z","lastTransitionTime":"2026-01-27T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.992567 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.992929 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.992952 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.992970 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:21 crc kubenswrapper[4796]: I0127 06:47:21.992987 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:21Z","lastTransitionTime":"2026-01-27T06:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.095695 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.095751 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.095762 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.095784 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.095796 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:22Z","lastTransitionTime":"2026-01-27T06:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.199110 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.199317 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.199351 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.199381 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.199404 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:22Z","lastTransitionTime":"2026-01-27T06:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.303211 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.303277 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.303295 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.303327 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.303349 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:22Z","lastTransitionTime":"2026-01-27T06:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.406525 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.406604 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.406624 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.406649 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.406665 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:22Z","lastTransitionTime":"2026-01-27T06:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.510323 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.510752 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.510862 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.510930 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.511000 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:22Z","lastTransitionTime":"2026-01-27T06:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.614360 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.614437 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.614461 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.614493 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.614515 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:22Z","lastTransitionTime":"2026-01-27T06:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.717759 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.717813 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.717824 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.717847 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.717859 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:22Z","lastTransitionTime":"2026-01-27T06:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.718879 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 11:35:48.794241304 +0000 UTC Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.747016 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.747019 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:22 crc kubenswrapper[4796]: E0127 06:47:22.747172 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:47:22 crc kubenswrapper[4796]: E0127 06:47:22.747363 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.821033 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.821123 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.821145 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.821175 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.821192 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:22Z","lastTransitionTime":"2026-01-27T06:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.924499 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.924587 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.924608 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.924633 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:22 crc kubenswrapper[4796]: I0127 06:47:22.924649 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:22Z","lastTransitionTime":"2026-01-27T06:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.027381 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.027415 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.027423 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.027441 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.027458 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:23Z","lastTransitionTime":"2026-01-27T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.030036 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.030110 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.030141 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.030153 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.030164 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:23Z","lastTransitionTime":"2026-01-27T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:23 crc kubenswrapper[4796]: E0127 06:47:23.042566 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab5d23f4-0a1a-4348-a4ed-cd82856490af\\\",\\\"systemUUID\\\":\\\"ea2a725c-47df-4291-8c97-fc5620e930c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:23Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.046610 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.046676 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.046700 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.046731 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.046754 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:23Z","lastTransitionTime":"2026-01-27T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:23 crc kubenswrapper[4796]: E0127 06:47:23.063770 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab5d23f4-0a1a-4348-a4ed-cd82856490af\\\",\\\"systemUUID\\\":\\\"ea2a725c-47df-4291-8c97-fc5620e930c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:23Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.067831 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.067893 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.067915 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.067939 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.067957 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:23Z","lastTransitionTime":"2026-01-27T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:23 crc kubenswrapper[4796]: E0127 06:47:23.086495 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab5d23f4-0a1a-4348-a4ed-cd82856490af\\\",\\\"systemUUID\\\":\\\"ea2a725c-47df-4291-8c97-fc5620e930c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:23Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.091253 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.091300 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.091327 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.091345 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.091357 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:23Z","lastTransitionTime":"2026-01-27T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:23 crc kubenswrapper[4796]: E0127 06:47:23.103913 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab5d23f4-0a1a-4348-a4ed-cd82856490af\\\",\\\"systemUUID\\\":\\\"ea2a725c-47df-4291-8c97-fc5620e930c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:23Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.108815 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.108850 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.108862 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.108877 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.108891 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:23Z","lastTransitionTime":"2026-01-27T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:23 crc kubenswrapper[4796]: E0127 06:47:23.123369 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab5d23f4-0a1a-4348-a4ed-cd82856490af\\\",\\\"systemUUID\\\":\\\"ea2a725c-47df-4291-8c97-fc5620e930c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:23Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:23 crc kubenswrapper[4796]: E0127 06:47:23.123480 4796 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.130243 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.130273 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.130282 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.130298 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.130311 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:23Z","lastTransitionTime":"2026-01-27T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.232280 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.232320 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.232332 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.232347 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.232395 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:23Z","lastTransitionTime":"2026-01-27T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.335504 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.335604 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.335621 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.335644 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.335656 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:23Z","lastTransitionTime":"2026-01-27T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.438267 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.438318 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.438331 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.438382 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.438396 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:23Z","lastTransitionTime":"2026-01-27T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.540893 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.540935 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.540946 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.540963 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.540976 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:23Z","lastTransitionTime":"2026-01-27T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.644375 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.644416 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.644426 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.644441 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.644450 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:23Z","lastTransitionTime":"2026-01-27T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.719616 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 07:31:29.415678219 +0000 UTC Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.746099 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.746141 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:23 crc kubenswrapper[4796]: E0127 06:47:23.746262 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:47:23 crc kubenswrapper[4796]: E0127 06:47:23.746439 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.747978 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.748015 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.748027 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.748044 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.748057 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:23Z","lastTransitionTime":"2026-01-27T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.850634 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.850713 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.850736 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.850767 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.850789 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:23Z","lastTransitionTime":"2026-01-27T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.953128 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.953188 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.953202 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.953226 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:23 crc kubenswrapper[4796]: I0127 06:47:23.953243 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:23Z","lastTransitionTime":"2026-01-27T06:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.056766 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.056819 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.056832 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.056855 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.056867 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:24Z","lastTransitionTime":"2026-01-27T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.159281 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.159336 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.159358 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.159384 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.159403 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:24Z","lastTransitionTime":"2026-01-27T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.261708 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.261753 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.261764 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.261780 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.261791 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:24Z","lastTransitionTime":"2026-01-27T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.365122 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.365180 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.365191 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.365211 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.365228 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:24Z","lastTransitionTime":"2026-01-27T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.467717 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.467776 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.467784 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.467804 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.467814 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:24Z","lastTransitionTime":"2026-01-27T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.570330 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.570406 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.570429 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.570456 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.570472 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:24Z","lastTransitionTime":"2026-01-27T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.673073 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.673113 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.673132 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.673148 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.673157 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:24Z","lastTransitionTime":"2026-01-27T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.720701 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 06:10:13.002950506 +0000 UTC Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.746216 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.746277 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:24 crc kubenswrapper[4796]: E0127 06:47:24.746439 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:47:24 crc kubenswrapper[4796]: E0127 06:47:24.746621 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.775557 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.775591 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.775600 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.775614 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.775625 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:24Z","lastTransitionTime":"2026-01-27T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.879224 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.879279 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.879291 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.879309 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.879334 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:24Z","lastTransitionTime":"2026-01-27T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.982694 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.982738 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.982747 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.982771 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:24 crc kubenswrapper[4796]: I0127 06:47:24.982788 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:24Z","lastTransitionTime":"2026-01-27T06:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.086653 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.086724 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.086747 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.086806 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.086828 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:25Z","lastTransitionTime":"2026-01-27T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.189229 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.189285 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.189301 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.189322 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.189337 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:25Z","lastTransitionTime":"2026-01-27T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.282296 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.291699 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.291742 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.291754 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.291771 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.291782 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:25Z","lastTransitionTime":"2026-01-27T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.306091 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2231382e49c9132f7858a6b3c2ca5d90f3667f27cbbd7f785f7dac10eba7a78f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:46:45.928911 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:25Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.325643 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:25Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.340441 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:25Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.359505 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46ql2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3555bc2-e335-4479-8b6f-8b5970b27a25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7ckp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46ql2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:25Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.383073 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1fb58d6-d9a4-4095-be46-a544216963f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa4ef31b3123acd6bde6c015b604904e010d9996f7ba318fa1a7ae674ea7cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88151a9fd7ac9365f1563c367588d8093a16206cfebfc1703f8d9cfceaee5134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bbcf36779d5d8ff4b06b0dba628a581f534c90af5774321a51bb6360c88db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e09fbd802f33aa6f2ebf9135fea9a98a1e8c0bb14440043fbada1b528b505ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e75d49621819f00db37b6f4471e5188a3eb781549bbae1422b6c2216a3366c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bafac
0d6ec40525fc7e61773ebfc410b7f7f42535491b72e0301ce93bd1a6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b39a7217daeda9dd1be33c1c724ff10d594e518288b1d92b411ec31b6f516f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5b39a7217daeda9dd1be33c1c724ff10d594e518288b1d92b411ec31b6f516f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:14Z\\\",\\\"message\\\":\\\"efault, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/certified-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"20da2226-531c-4179-9810-aa4026995ca3\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/certified-operators\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/certified-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/certified-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.214\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 
06:47:14.602027 6475 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 06:47:14.602184 6475 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-xqmc4_openshift-ovn-kubernetes(a1fb58d6-d9a4-4095-be46-a544216963f7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7353129cb709af2186d760a00dbef3ae91dc7feeb6dab2149754accb95fae36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xqmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:25Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.394763 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.394806 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.394815 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.394830 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.394841 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:25Z","lastTransitionTime":"2026-01-27T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.395861 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kx5rc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6932c20-41d2-487b-90b4-1e3c96cb17fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52d0c40d9e59c7e62b7479b955967849178079a9bb2ac73ac987bd4eddc11183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kx5rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:25Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.407796 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gvx56" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjbl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjbl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gvx56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:25Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.428638 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91a38b2a-1669-414b-b2de-bd9f7d656db4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e496a7cea3a6ba7b8e4e60bd02f9cafc56cc4cf1049524b15749352b71b1551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40ca26c279d35c5369a97a8926c8c76b5773da64438b08a0fd721604f5cf1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51f23c6ba8596716e9d0b3001ed8d1494f186950ddb86e3187e98b5a9c5940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5e7ccdedeb2a9cc67c1eb6014f0a327f009ba
ace0decc678fd319c7f02e63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8be41d70648c6527439ed95843ec087f65506642438fab67094697952d0c69ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:25Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.441467 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c8f9bf62140eb4e54fc54bfef65710f511027fb29cc70e83d1f2516bb5b222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:25Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.456716 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f76abf7f-03d4-496f-b7bd-1bc63e0425e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19e9f9a416c0e4e8723cfe5ccf410490e6599f0def64f7e8e1beab5d3ae8896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76f9100193fe1dc49d79019ff1953e3504677a9ca67ea5d80b6ec1d79337bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gddlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:25Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.470864 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5411bac6-98fc-4818-b8d8-719ac8a4e77d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4703bc30d7f8c3e6340ce7f818985b99667f3703b1399e8542341618f423ce05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6a258372b9862ec4e513ffb93a28fb346a67b05941486a4be0275b94cea2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef72b83b73d0b802d6dd4dfd087f388d7a1e01621dd126b13729831545461d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f1
1f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://859ce6407959a0931a232e38824568585b947803cb4437a0d0e5101c8b19a36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ce6407959a0931a232e38824568585b947803cb4437a0d0e5101c8b19a36b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:25Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.485847 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:25Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.497755 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.497803 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.497816 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.497839 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.497856 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:25Z","lastTransitionTime":"2026-01-27T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.504867 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:25Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.523793 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:25Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.534944 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://372dffb4f29750115dda48ed80597b347e7021ac6dcb2c4161ee2f290f27024f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:25Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.550152 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec0d72cc9d4158b10455650c8b8f916e9c314616c0cf92b4e1f2b966fa650e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:25Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.567292 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7c8820-8f4a-4cb8-a949-cbfa4e8efea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://948d327df2d50b17a182fed3cb301b1396229844a41a5270ea33d7c5e9f95db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48948084ba0e80ec0b29617d1cec75a0d5e625e2343ee2d2cb3485bfbc526e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a6b5f1a21e3d2b93baf990f32b73b024a048bffefe4304ff435a2bd428b099b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-c
ontroller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc39fbcef1d044b8d23138e48425ad33e0650e222fb2131f45b59ca7d954820\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:25Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.582811 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3321234c0c20f923d830de9c484b2999c545617b190e9d71878ac27415be5e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:25Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.601390 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.601442 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:25 crc 
kubenswrapper[4796]: I0127 06:47:25.601455 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.601477 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.601489 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:25Z","lastTransitionTime":"2026-01-27T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.704443 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.704738 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.704814 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.704891 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.704965 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:25Z","lastTransitionTime":"2026-01-27T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.721013 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 02:16:33.146974509 +0000 UTC Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.747076 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:47:25 crc kubenswrapper[4796]: E0127 06:47:25.747369 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.747212 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:25 crc kubenswrapper[4796]: E0127 06:47:25.747578 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.807697 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.807764 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.807777 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.807817 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.807830 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:25Z","lastTransitionTime":"2026-01-27T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.910520 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.910817 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.910919 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.911007 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:25 crc kubenswrapper[4796]: I0127 06:47:25.911093 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:25Z","lastTransitionTime":"2026-01-27T06:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.014303 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.014539 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.014656 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.014753 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.014818 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:26Z","lastTransitionTime":"2026-01-27T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.117649 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.117708 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.117722 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.117740 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.117750 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:26Z","lastTransitionTime":"2026-01-27T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.220378 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.220416 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.220427 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.220444 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.220455 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:26Z","lastTransitionTime":"2026-01-27T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.322936 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.323183 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.323266 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.323356 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.323435 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:26Z","lastTransitionTime":"2026-01-27T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.425735 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.426041 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.426210 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.426368 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.426499 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:26Z","lastTransitionTime":"2026-01-27T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.528467 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.528511 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.528522 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.528553 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.528565 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:26Z","lastTransitionTime":"2026-01-27T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.632887 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.632925 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.632936 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.632956 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.632971 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:26Z","lastTransitionTime":"2026-01-27T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.722428 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 16:16:29.360933481 +0000 UTC Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.735056 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.735090 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.735101 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.735117 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.735126 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:26Z","lastTransitionTime":"2026-01-27T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.746955 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:26 crc kubenswrapper[4796]: E0127 06:47:26.747051 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.747298 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:26 crc kubenswrapper[4796]: E0127 06:47:26.747612 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.837557 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.837616 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.837633 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.837656 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.837668 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:26Z","lastTransitionTime":"2026-01-27T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.939909 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.939946 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.939955 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.939968 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:26 crc kubenswrapper[4796]: I0127 06:47:26.939977 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:26Z","lastTransitionTime":"2026-01-27T06:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.042686 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.043098 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.043279 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.043432 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.043607 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:27Z","lastTransitionTime":"2026-01-27T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.146204 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.146283 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.146306 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.146339 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.146363 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:27Z","lastTransitionTime":"2026-01-27T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.249221 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.249456 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.249574 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.249667 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.249830 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:27Z","lastTransitionTime":"2026-01-27T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.352061 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.352099 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.352111 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.352124 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.352134 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:27Z","lastTransitionTime":"2026-01-27T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.455318 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.455360 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.455371 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.455389 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.455400 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:27Z","lastTransitionTime":"2026-01-27T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.557920 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.557956 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.557965 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.557978 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.557988 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:27Z","lastTransitionTime":"2026-01-27T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.660614 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.660645 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.660653 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.660667 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.660676 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:27Z","lastTransitionTime":"2026-01-27T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.722595 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 06:01:54.495003914 +0000 UTC Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.746074 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.746121 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:27 crc kubenswrapper[4796]: E0127 06:47:27.746209 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:47:27 crc kubenswrapper[4796]: E0127 06:47:27.746597 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.762990 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.763017 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.763026 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.763038 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.763046 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:27Z","lastTransitionTime":"2026-01-27T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.865094 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.865137 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.865150 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.865167 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.865179 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:27Z","lastTransitionTime":"2026-01-27T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.968251 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.968285 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.968296 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.968314 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:27 crc kubenswrapper[4796]: I0127 06:47:27.968365 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:27Z","lastTransitionTime":"2026-01-27T06:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.073608 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.073709 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.073719 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.073740 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.073750 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:28Z","lastTransitionTime":"2026-01-27T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.176627 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.176677 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.176688 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.176707 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.176721 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:28Z","lastTransitionTime":"2026-01-27T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.279179 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.279221 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.279229 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.279244 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.279253 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:28Z","lastTransitionTime":"2026-01-27T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.382386 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.382421 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.382430 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.382443 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.382452 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:28Z","lastTransitionTime":"2026-01-27T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.485108 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.485148 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.485159 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.485174 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.485185 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:28Z","lastTransitionTime":"2026-01-27T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.587844 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.587889 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.587899 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.587914 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.587924 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:28Z","lastTransitionTime":"2026-01-27T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.689950 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.690032 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.690046 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.690066 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.690078 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:28Z","lastTransitionTime":"2026-01-27T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.723545 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 10:54:37.826598936 +0000 UTC Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.746987 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.747044 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:28 crc kubenswrapper[4796]: E0127 06:47:28.747117 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:47:28 crc kubenswrapper[4796]: E0127 06:47:28.747177 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.792518 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.792597 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.792607 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.792620 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.792630 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:28Z","lastTransitionTime":"2026-01-27T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.895984 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.896029 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.896041 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.896058 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.896070 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:28Z","lastTransitionTime":"2026-01-27T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.998917 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.998955 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.998963 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.998977 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:28 crc kubenswrapper[4796]: I0127 06:47:28.998986 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:28Z","lastTransitionTime":"2026-01-27T06:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.101794 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.101844 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.101857 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.101877 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.101891 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:29Z","lastTransitionTime":"2026-01-27T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.204799 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.204839 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.204848 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.204864 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.204874 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:29Z","lastTransitionTime":"2026-01-27T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.307367 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.307450 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.307471 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.307495 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.307512 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:29Z","lastTransitionTime":"2026-01-27T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.410980 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.411025 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.411038 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.411060 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.411072 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:29Z","lastTransitionTime":"2026-01-27T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.514605 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.515073 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.515153 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.515242 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.515341 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:29Z","lastTransitionTime":"2026-01-27T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.618223 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.618262 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.618276 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.618293 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.618307 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:29Z","lastTransitionTime":"2026-01-27T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.719984 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.720021 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.720033 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.720048 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.720059 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:29Z","lastTransitionTime":"2026-01-27T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.724313 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 04:37:57.361504221 +0000 UTC Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.746869 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.746953 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:29 crc kubenswrapper[4796]: E0127 06:47:29.747062 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:47:29 crc kubenswrapper[4796]: E0127 06:47:29.747149 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.821902 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.821957 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.821972 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.821992 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.822004 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:29Z","lastTransitionTime":"2026-01-27T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.925392 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.925470 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.925486 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.925513 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:29 crc kubenswrapper[4796]: I0127 06:47:29.925528 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:29Z","lastTransitionTime":"2026-01-27T06:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.028104 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.028135 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.028146 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.028161 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.028171 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:30Z","lastTransitionTime":"2026-01-27T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.130690 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.130778 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.130819 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.130845 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.130888 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:30Z","lastTransitionTime":"2026-01-27T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.234571 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.234644 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.234658 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.234684 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.234699 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:30Z","lastTransitionTime":"2026-01-27T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.338027 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.338099 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.338111 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.338137 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.338149 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:30Z","lastTransitionTime":"2026-01-27T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.441400 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.441471 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.441485 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.441507 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.441520 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:30Z","lastTransitionTime":"2026-01-27T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.544363 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.544809 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.544950 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.545050 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.545137 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:30Z","lastTransitionTime":"2026-01-27T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.647952 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.647997 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.648011 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.648032 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.648050 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:30Z","lastTransitionTime":"2026-01-27T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.724672 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 04:41:38.199361945 +0000 UTC Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.746783 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:30 crc kubenswrapper[4796]: E0127 06:47:30.747006 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.747584 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:30 crc kubenswrapper[4796]: E0127 06:47:30.747699 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.752103 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.752173 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.752188 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.752213 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.752235 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:30Z","lastTransitionTime":"2026-01-27T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.753989 4796 scope.go:117] "RemoveContainer" containerID="a5b39a7217daeda9dd1be33c1c724ff10d594e518288b1d92b411ec31b6f516f" Jan 27 06:47:30 crc kubenswrapper[4796]: E0127 06:47:30.754455 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-xqmc4_openshift-ovn-kubernetes(a1fb58d6-d9a4-4095-be46-a544216963f7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.762254 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://372dffb4f29750115dda48ed80597b347e7021ac6dcb2c4161ee2f290f27024f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:30Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.777444 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec0d72cc9d4158b10455650c8b8f916e9c314616c0cf92b4e1f2b966fa650e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:30Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.794069 4796 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5411bac6-98fc-4818-b8d8-719ac8a4e77d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4703bc30d7f8c3e6340ce7f818985b99667f3703b1399e8542341618f423ce05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6a258372b9862ec4e513ffb93a28fb346a67b05941486a4be0275b94cea2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef72b83b73d0b802d6dd4dfd087f388d7a1e01621dd126b13729831545461d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://859ce6407959a0931a232e38824568585b947803cb4437a0d0e5101c8b19a36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ce6407959a0931a232e38824568585b947803cb4437a0d0e5101c8b19a36b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:30Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.812417 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:30Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.830231 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:30Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.846702 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:30Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.856019 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.856064 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.856080 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.856101 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.856113 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:30Z","lastTransitionTime":"2026-01-27T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.862913 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7c8820-8f4a-4cb8-a949-cbfa4e8efea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://948d327df2d50b17a182fed3cb301b1396229844a41a5270ea33d7c5e9f95db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48948084ba0e80ec0b29617d1cec75a0d5e625e2343ee2d2cb3485bfbc526e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a6b5f1a21e3d2b93baf990f32b73b024a048bffefe4304ff435a2bd428b099b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc39fbcef1d044b8d23138e48425ad33e0650e222fb2131f45b59ca7d954820\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:30Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.882898 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3321234c0c20f923d830de9c484b2999c545617b190e9d71878ac27415be5e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:30Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.893074 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kx5rc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6932c20-41d2-487b-90b4-1e3c96cb17fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52d0c40d9e59c7e62b7479b955967849178079a9bb2ac73ac987bd4eddc11183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kx5rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:30Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.904442 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gvx56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjbl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjbl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gvx56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:30Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.919783 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2231382e49c9132f7858a6b3c2ca5d90f3667f27cbbd7f785f7dac10eba7a78f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:46:45.928911 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:30Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.937242 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:30Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.953639 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:30Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.958679 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.958857 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.958928 4796 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.958995 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.959053 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:30Z","lastTransitionTime":"2026-01-27T06:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.973105 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46ql2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3555bc2-e335-4479-8b6f-8b5970b27a25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7ckp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46ql2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:30Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:30 crc kubenswrapper[4796]: I0127 06:47:30.992946 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1fb58d6-d9a4-4095-be46-a544216963f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa4ef31b3123acd6bde6c015b604904e010d9996f7ba318fa1a7ae674ea7cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88151a9fd7ac9365f1563c367588d8093a16206cfebfc1703f8d9cfceaee5134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bbcf36779d5d8ff4b06b0dba628a581f534c90af5774321a51bb6360c88db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e09fbd802f33aa6f2ebf9135fea9a98a1e8c0bb14440043fbada1b528b505ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e75d49621819f00db37b6f4471e5188a3eb781549bbae1422b6c2216a3366c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bafac0d6ec40525fc7e61773ebfc410b7f7f42535491b72e0301ce93bd1a6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b39a7217daeda9dd1be33c1c724ff10d594e51
8288b1d92b411ec31b6f516f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5b39a7217daeda9dd1be33c1c724ff10d594e518288b1d92b411ec31b6f516f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:14Z\\\",\\\"message\\\":\\\"efault, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/certified-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"20da2226-531c-4179-9810-aa4026995ca3\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/certified-operators\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/certified-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/certified-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.214\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 06:47:14.602027 6475 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 06:47:14.602184 6475 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xqmc4_openshift-ovn-kubernetes(a1fb58d6-d9a4-4095-be46-a544216963f7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7353129cb709af2186d760a00dbef3ae91dc7feeb6dab2149754accb95fae36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xqmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:30Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.015005 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91a38b2a-1669-414b-b2de-bd9f7d656db4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e496a7cea3a6ba7b8e4e60bd02f9cafc56cc4cf1049524b15749352b71b1551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40ca26c279d35c5369a97a8
926c8c76b5773da64438b08a0fd721604f5cf1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51f23c6ba8596716e9d0b3001ed8d1494f186950ddb86e3187e98b5a9c5940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5e7ccdedeb2a9cc67c1eb6014f0a327f009baace0decc678fd319c7f02e63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8be41d70648c6527439ed95843ec087f65506642438fab67094697952d0c69ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:31Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.033350 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c8f9bf62140eb4e54fc54bfef65710f511027fb29cc70e83d1f2516bb5b222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:31Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.050750 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f76abf7f-03d4-496f-b7bd-1bc63e0425e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19e9f9a416c0e4e8723cfe5ccf410490e6599f0def64f7e8e1beab5d3ae8896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76f9100193fe1dc49d79019ff1953e3504677a9ca67ea5d80b6ec1d79337bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gddlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:31Z is after 2025-08-24T17:21:41Z" Jan 27 
06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.061906 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.061966 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.061978 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.061998 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.062033 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:31Z","lastTransitionTime":"2026-01-27T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.165621 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.165686 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.165704 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.165731 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.165751 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:31Z","lastTransitionTime":"2026-01-27T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.268288 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.268598 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.268715 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.268829 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.268919 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:31Z","lastTransitionTime":"2026-01-27T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.371309 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.371366 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.371380 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.371401 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.371417 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:31Z","lastTransitionTime":"2026-01-27T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.473368 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.473414 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.473426 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.473441 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.473452 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:31Z","lastTransitionTime":"2026-01-27T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.576567 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.576610 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.576620 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.576635 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.576645 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:31Z","lastTransitionTime":"2026-01-27T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.678794 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.678835 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.678845 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.678863 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.678875 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:31Z","lastTransitionTime":"2026-01-27T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.725321 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 21:21:15.56167489 +0000 UTC Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.746895 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.746943 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:31 crc kubenswrapper[4796]: E0127 06:47:31.747138 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:47:31 crc kubenswrapper[4796]: E0127 06:47:31.747305 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.781692 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.781731 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.781741 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.781758 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.781768 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:31Z","lastTransitionTime":"2026-01-27T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.884238 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.884285 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.884295 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.884315 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.884326 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:31Z","lastTransitionTime":"2026-01-27T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.986900 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.986939 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.986948 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.986961 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:31 crc kubenswrapper[4796]: I0127 06:47:31.986970 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:31Z","lastTransitionTime":"2026-01-27T06:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.089679 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.089735 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.089748 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.089768 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.089779 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:32Z","lastTransitionTime":"2026-01-27T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.192432 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.192465 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.192473 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.192486 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.192496 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:32Z","lastTransitionTime":"2026-01-27T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.295620 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.295663 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.295672 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.295688 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.295701 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:32Z","lastTransitionTime":"2026-01-27T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.398718 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.398781 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.398797 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.398825 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.398845 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:32Z","lastTransitionTime":"2026-01-27T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.502230 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.502304 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.502324 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.502351 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.502369 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:32Z","lastTransitionTime":"2026-01-27T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.604575 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.604770 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.604855 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.604918 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.604989 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:32Z","lastTransitionTime":"2026-01-27T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.707286 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.707321 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.707332 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.707350 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.707362 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:32Z","lastTransitionTime":"2026-01-27T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.725730 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 01:15:59.460668396 +0000 UTC Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.747195 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.747195 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:32 crc kubenswrapper[4796]: E0127 06:47:32.747410 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:47:32 crc kubenswrapper[4796]: E0127 06:47:32.747334 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.809295 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.809332 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.809343 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.809359 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.809372 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:32Z","lastTransitionTime":"2026-01-27T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.911788 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.911844 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.911856 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.911873 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:32 crc kubenswrapper[4796]: I0127 06:47:32.911885 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:32Z","lastTransitionTime":"2026-01-27T06:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.015323 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.015474 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.015495 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.015518 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.015564 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:33Z","lastTransitionTime":"2026-01-27T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.118814 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.118885 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.118899 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.118926 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.118944 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:33Z","lastTransitionTime":"2026-01-27T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.221290 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.221335 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.221345 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.221362 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.221376 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:33Z","lastTransitionTime":"2026-01-27T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.336973 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.337006 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.337025 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.337041 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.337050 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:33Z","lastTransitionTime":"2026-01-27T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.439566 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.439621 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.439638 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.439663 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.439682 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:33Z","lastTransitionTime":"2026-01-27T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.523348 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.523382 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.523390 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.523404 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.523414 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:33Z","lastTransitionTime":"2026-01-27T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:33 crc kubenswrapper[4796]: E0127 06:47:33.539343 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab5d23f4-0a1a-4348-a4ed-cd82856490af\\\",\\\"systemUUID\\\":\\\"ea2a725c-47df-4291-8c97-fc5620e930c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:33Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.543488 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.543525 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.543574 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.543591 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.543604 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:33Z","lastTransitionTime":"2026-01-27T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:33 crc kubenswrapper[4796]: E0127 06:47:33.555906 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab5d23f4-0a1a-4348-a4ed-cd82856490af\\\",\\\"systemUUID\\\":\\\"ea2a725c-47df-4291-8c97-fc5620e930c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:33Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.559910 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.559966 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.559981 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.560002 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.560014 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:33Z","lastTransitionTime":"2026-01-27T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:33 crc kubenswrapper[4796]: E0127 06:47:33.573857 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab5d23f4-0a1a-4348-a4ed-cd82856490af\\\",\\\"systemUUID\\\":\\\"ea2a725c-47df-4291-8c97-fc5620e930c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:33Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.578954 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.579007 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.579023 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.579049 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.579065 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:33Z","lastTransitionTime":"2026-01-27T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:33 crc kubenswrapper[4796]: E0127 06:47:33.593076 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab5d23f4-0a1a-4348-a4ed-cd82856490af\\\",\\\"systemUUID\\\":\\\"ea2a725c-47df-4291-8c97-fc5620e930c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:33Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.597086 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.597217 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.597399 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.597599 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.597868 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:33Z","lastTransitionTime":"2026-01-27T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:33 crc kubenswrapper[4796]: E0127 06:47:33.608996 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab5d23f4-0a1a-4348-a4ed-cd82856490af\\\",\\\"systemUUID\\\":\\\"ea2a725c-47df-4291-8c97-fc5620e930c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:33Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:33 crc kubenswrapper[4796]: E0127 06:47:33.609133 4796 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.611125 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.611161 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.611173 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.611197 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.611215 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:33Z","lastTransitionTime":"2026-01-27T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.715234 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.715283 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.715297 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.715313 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.715325 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:33Z","lastTransitionTime":"2026-01-27T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.726505 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 18:21:21.030889936 +0000 UTC Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.747104 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.747218 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:47:33 crc kubenswrapper[4796]: E0127 06:47:33.747303 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:47:33 crc kubenswrapper[4796]: E0127 06:47:33.747454 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.817916 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.817979 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.817999 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.818026 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.818044 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:33Z","lastTransitionTime":"2026-01-27T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.920669 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.920722 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.920743 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.920760 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:33 crc kubenswrapper[4796]: I0127 06:47:33.920775 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:33Z","lastTransitionTime":"2026-01-27T06:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.023773 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.023848 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.023862 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.023888 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.023908 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:34Z","lastTransitionTime":"2026-01-27T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.127753 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.127809 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.127826 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.127852 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.127869 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:34Z","lastTransitionTime":"2026-01-27T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.230748 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.230797 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.230807 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.230827 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.230838 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:34Z","lastTransitionTime":"2026-01-27T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.283747 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09-metrics-certs\") pod \"network-metrics-daemon-gvx56\" (UID: \"bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09\") " pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:47:34 crc kubenswrapper[4796]: E0127 06:47:34.283941 4796 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:47:34 crc kubenswrapper[4796]: E0127 06:47:34.284021 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09-metrics-certs podName:bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09 nodeName:}" failed. No retries permitted until 2026-01-27 06:48:06.283999783 +0000 UTC m=+107.390967110 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09-metrics-certs") pod "network-metrics-daemon-gvx56" (UID: "bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.333919 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.333972 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.334018 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.334039 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.334089 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:34Z","lastTransitionTime":"2026-01-27T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.436898 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.436950 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.436959 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.436971 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.436982 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:34Z","lastTransitionTime":"2026-01-27T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.539654 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.540007 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.540075 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.540152 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.540219 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:34Z","lastTransitionTime":"2026-01-27T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.643018 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.643095 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.643114 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.643141 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.643160 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:34Z","lastTransitionTime":"2026-01-27T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.727617 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 09:15:00.942675723 +0000 UTC Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.746235 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.746301 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:34 crc kubenswrapper[4796]: E0127 06:47:34.746347 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:47:34 crc kubenswrapper[4796]: E0127 06:47:34.746474 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.746596 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.746652 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.746665 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.746688 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.746703 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:34Z","lastTransitionTime":"2026-01-27T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.850656 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.850750 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.850761 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.850790 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.850803 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:34Z","lastTransitionTime":"2026-01-27T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.957767 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.957828 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.957842 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.957863 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:34 crc kubenswrapper[4796]: I0127 06:47:34.957879 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:34Z","lastTransitionTime":"2026-01-27T06:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.060669 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.060730 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.060744 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.060770 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.060783 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:35Z","lastTransitionTime":"2026-01-27T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.163621 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.163660 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.163668 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.163682 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.163691 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:35Z","lastTransitionTime":"2026-01-27T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.266868 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.266912 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.266924 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.266943 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.266951 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:35Z","lastTransitionTime":"2026-01-27T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.368873 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.368926 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.368939 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.368966 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.368977 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:35Z","lastTransitionTime":"2026-01-27T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.471421 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.471449 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.471457 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.471470 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.471480 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:35Z","lastTransitionTime":"2026-01-27T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.573742 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.573828 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.573856 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.573896 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.573919 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:35Z","lastTransitionTime":"2026-01-27T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.677374 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.677415 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.677424 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.677442 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.677452 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:35Z","lastTransitionTime":"2026-01-27T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.728810 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 20:36:45.790739711 +0000 UTC Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.746643 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:47:35 crc kubenswrapper[4796]: E0127 06:47:35.746851 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.746914 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:35 crc kubenswrapper[4796]: E0127 06:47:35.747152 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.780811 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.780918 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.780945 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.780978 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.780997 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:35Z","lastTransitionTime":"2026-01-27T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.883231 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.883293 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.883310 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.883337 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.883420 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:35Z","lastTransitionTime":"2026-01-27T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.988012 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.988059 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.988068 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.988088 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:35 crc kubenswrapper[4796]: I0127 06:47:35.988099 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:35Z","lastTransitionTime":"2026-01-27T06:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.091145 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.091226 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.091249 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.091276 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.091298 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:36Z","lastTransitionTime":"2026-01-27T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.194338 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.194373 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.194383 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.194402 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.194413 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:36Z","lastTransitionTime":"2026-01-27T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.297614 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.297660 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.297677 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.297702 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.297720 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:36Z","lastTransitionTime":"2026-01-27T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.403004 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.403098 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.403117 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.403144 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.403175 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:36Z","lastTransitionTime":"2026-01-27T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.506294 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.506361 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.506387 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.506416 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.506437 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:36Z","lastTransitionTime":"2026-01-27T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.609830 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.609871 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.609880 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.609897 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.609908 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:36Z","lastTransitionTime":"2026-01-27T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.711936 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.711980 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.711992 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.712019 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.712044 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:36Z","lastTransitionTime":"2026-01-27T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.729610 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 14:14:34.393266588 +0000 UTC Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.747100 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:36 crc kubenswrapper[4796]: E0127 06:47:36.747238 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.747104 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:36 crc kubenswrapper[4796]: E0127 06:47:36.747494 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.814797 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.814849 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.814865 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.814884 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.814899 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:36Z","lastTransitionTime":"2026-01-27T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.917694 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.917766 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.917774 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.917789 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:36 crc kubenswrapper[4796]: I0127 06:47:36.917797 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:36Z","lastTransitionTime":"2026-01-27T06:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.021453 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.021519 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.021575 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.021607 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.021629 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:37Z","lastTransitionTime":"2026-01-27T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.125692 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.125763 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.125787 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.125831 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.125853 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:37Z","lastTransitionTime":"2026-01-27T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.227404 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.227446 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.227458 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.227475 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.227485 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:37Z","lastTransitionTime":"2026-01-27T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.228218 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-46ql2_b3555bc2-e335-4479-8b6f-8b5970b27a25/kube-multus/0.log" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.228276 4796 generic.go:334] "Generic (PLEG): container finished" podID="b3555bc2-e335-4479-8b6f-8b5970b27a25" containerID="ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d" exitCode=1 Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.228307 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-46ql2" event={"ID":"b3555bc2-e335-4479-8b6f-8b5970b27a25","Type":"ContainerDied","Data":"ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d"} Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.228713 4796 scope.go:117] "RemoveContainer" containerID="ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.246321 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:37Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.267622 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:37Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.290288 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:37Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.308615 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://372dffb4f29750115dda48ed80597b347e7021ac6dcb2c4161ee2f290f27024f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T06:47:37Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.323142 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec0d72cc9d4158b10455650c8b8f916e9c314616c0cf92b4e1f2b966fa650e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:37Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.331084 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.331119 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.331132 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.331151 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.331163 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:37Z","lastTransitionTime":"2026-01-27T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.341010 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5411bac6-98fc-4818-b8d8-719ac8a4e77d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4703bc30d7f8c3e6340ce7f818985b99667f3703b1399e8542341618f423ce05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6a258372b9862ec4e513ffb93a28fb346a67b05941486a4be0275b94cea2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef72b83b73d0b802d6dd4dfd087f388d7a1e01621dd126b13729831545461d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://859ce6407959a0931a232e38824568585b947803cb4437a0d0e5101c8b19a36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ce6407959a0931a232e38824568585b947803cb4437a0d0e5101c8b19a36b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:37Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.361660 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7c8820-8f4a-4cb8-a949-cbfa4e8efea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://948d327df2d50b17a182fed3cb301b1396229844a41a5270ea33d7c5e9f95db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48948084ba0e80ec0b29617d1cec75a0d5e625e2343ee2d2cb3485bfbc526e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a6b5f1a21e3d2b93baf990f32b73b024a048bffefe4304ff435a2bd428b099b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc39fbcef1d044b8d23138e48425ad33e0650e222fb2131f45b59ca7d954820\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:37Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.380448 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3321234c0c20f923d830de9c484b2999c545617b190e9d71878ac27415be5e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51
656f6b8878fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:37Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.399935 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\
\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:37Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.416510 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46ql2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3555bc2-e335-4479-8b6f-8b5970b27a25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:36Z\\\",\\\"message\\\":\\\"2026-01-27T06:46:51+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8b8d5c2a-d08d-4f37-9c93-e0c8ee74c708\\\\n2026-01-27T06:46:51+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8b8d5c2a-d08d-4f37-9c93-e0c8ee74c708 to /host/opt/cni/bin/\\\\n2026-01-27T06:46:51Z [verbose] multus-daemon started\\\\n2026-01-27T06:46:51Z [verbose] Readiness Indicator file check\\\\n2026-01-27T06:47:36Z [error] have you checked that your default network is 
ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7ckp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46ql2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:37Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.433885 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.433935 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.433948 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.433965 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.433978 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:37Z","lastTransitionTime":"2026-01-27T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.439498 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1fb58d6-d9a4-4095-be46-a544216963f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa4ef31b3123acd6bde6c015b604904e010d9996f7ba318fa1a7ae674ea7cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88151a9fd7ac9365f1563c367588d8093a16206cfebfc1703f8d9cfceaee5134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fs
kkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bbcf36779d5d8ff4b06b0dba628a581f534c90af5774321a51bb6360c88db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e09fbd802f33aa6f2ebf9135fea9a98a1e8c0bb14440043fbada1b528b505ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e75d49621819f00db37b6f4471e5188a3eb781549bbae1422b6c2216a3366c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bafac0d6ec40525fc7e61773ebfc410b7f7f42535491b72e0301ce93bd1a6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b39a7217daeda9dd1be33c1c724ff10d594e518288b1d92b411ec31b6f516f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5b39a7217daeda9dd1be33c1c724ff10d594e518288b1d92b411ec31b6f516f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:14Z\\\",\\\"message\\\":\\\"efault, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/certified-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"20da2226-531c-4179-9810-aa4026995ca3\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/certified-operators\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/certified-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/certified-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.214\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 06:47:14.602027 6475 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 06:47:14.602184 6475 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-xqmc4_openshift-ovn-kubernetes(a1fb58d6-d9a4-4095-be46-a544216963f7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7353129cb709af2186d760a00dbef3ae91dc7feeb6dab2149754accb95fae36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xqmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:37Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.456179 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kx5rc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6932c20-41d2-487b-90b4-1e3c96cb17fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52d0c40d9e59c7e62b7479b955967849178079a9bb2ac73ac987bd4eddc11183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kx5rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:37Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.469011 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gvx56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjbl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjbl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gvx56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:37Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.491724 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2231382e49c9132f7858a6b3c2ca5d90f3667f27cbbd7f785f7dac10eba7a78f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:46:45.928911 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:37Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.507396 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:37Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.521206 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f76abf7f-03d4-496f-b7bd-1bc63e0425e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19e9f9a416c0e4e8723cfe5ccf410490e6599f0def64f7e8e1beab5d3ae8896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76f9100193fe1dc49d79019ff1953e3504677a9ca67ea5d80b6ec1d79337bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gddlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:37Z is after 2025-08-24T17:21:41Z" Jan 27 
06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.537505 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.537579 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.537593 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.537610 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.537622 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:37Z","lastTransitionTime":"2026-01-27T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.546749 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91a38b2a-1669-414b-b2de-bd9f7d656db4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e496a7cea3a6ba7b8e4e60bd02f9cafc56cc4cf1049524b15749352b71b1551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40ca26c279d35c5369a97a8926c8c76b5773da64438b08a0fd721604f5cf1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51f23c6ba8596716e9d0b3001ed8d1494f186950ddb86e3187e98b5a9c5940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5e7ccdedeb2a9cc67c1eb6014f0a327f009baace0decc678fd319c7f02e63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8be41d70648c6527439ed95843ec087f65506642438fab67094697952d0c69ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:37Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.560513 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c8f9bf62140eb4e54fc54bfef65710f511027fb29cc70e83d1f2516bb5b222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:37Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.640292 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.640334 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.640346 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.640364 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.640378 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:37Z","lastTransitionTime":"2026-01-27T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.730458 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 01:27:09.861925545 +0000 UTC Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.743202 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.743681 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.743906 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.744133 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.744354 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:37Z","lastTransitionTime":"2026-01-27T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.746658 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.746685 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:47:37 crc kubenswrapper[4796]: E0127 06:47:37.746784 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:47:37 crc kubenswrapper[4796]: E0127 06:47:37.746953 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.847387 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.847699 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.847910 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.848136 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.848352 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:37Z","lastTransitionTime":"2026-01-27T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.952181 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.952256 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.952276 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.952307 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:37 crc kubenswrapper[4796]: I0127 06:47:37.952326 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:37Z","lastTransitionTime":"2026-01-27T06:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.057018 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.057079 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.057097 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.057121 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.057138 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:38Z","lastTransitionTime":"2026-01-27T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.160401 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.160467 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.160484 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.160510 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.160529 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:38Z","lastTransitionTime":"2026-01-27T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.235833 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-46ql2_b3555bc2-e335-4479-8b6f-8b5970b27a25/kube-multus/0.log" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.236464 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-46ql2" event={"ID":"b3555bc2-e335-4479-8b6f-8b5970b27a25","Type":"ContainerStarted","Data":"f7a41342bb50022b27d6a19ad1eef8be259001106eed45870bab131cd92fc35c"} Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.259916 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:38Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.264035 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.264089 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.264116 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.264147 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.264168 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:38Z","lastTransitionTime":"2026-01-27T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.277007 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://372dffb4f29750115dda48ed80597b347e7021ac6dcb2c4161ee2f290f27024f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:38Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.293293 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec0d72cc9d4158b10455650c8b8f916e9c314616c0cf92b4e1f2b966fa650e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:38Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.312831 4796 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5411bac6-98fc-4818-b8d8-719ac8a4e77d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4703bc30d7f8c3e6340ce7f818985b99667f3703b1399e8542341618f423ce05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6a258372b9862ec4e513ffb93a28fb346a67b05941486a4be0275b94cea2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef72b83b73d0b802d6dd4dfd087f388d7a1e01621dd126b13729831545461d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://859ce6407959a0931a232e38824568585b947803cb4437a0d0e5101c8b19a36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ce6407959a0931a232e38824568585b947803cb4437a0d0e5101c8b19a36b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:38Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.337837 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:38Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.358858 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:38Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.366076 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.366108 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.366119 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.366135 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.366146 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:38Z","lastTransitionTime":"2026-01-27T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.376037 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7c8820-8f4a-4cb8-a949-cbfa4e8efea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://948d327df2d50b17a182fed3cb301b1396229844a41a5270ea33d7c5e9f95db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48948084ba0e80ec0b29617d1cec75a0d5e625e2343ee2d2cb3485bfbc526e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a6b5f1a21e3d2b93baf990f32b73b024a048bffefe4304ff435a2bd428b099b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc39fbcef1d044b8d23138e48425ad33e0650e222fb2131f45b59ca7d954820\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:38Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.394508 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3321234c0c20f923d830de9c484b2999c545617b190e9d71878ac27415be5e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:38Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.417630 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1fb58d6-d9a4-4095-be46-a544216963f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa4ef31b3123acd6bde6c015b604904e010d9996f7ba318fa1a7ae674ea7cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88151a9fd7ac9365f1563c367588d8093a16206cfebfc1703f8d9cfceaee5134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bbcf36779d5d8ff4b06b0dba628a581f534c90af5774321a51bb6360c88db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e09fbd802f33aa6f2ebf9135fea9a98a1e8c0bb14440043fbada1b528b505ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e75d49621819f00db37b6f4471e5188a3eb781549bbae1422b6c2216a3366c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bafac0d6ec40525fc7e61773ebfc410b7f7f42535491b72e0301ce93bd1a6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b39a7217daeda9dd1be33c1c724ff10d594e518288b1d92b411ec31b6f516f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5b39a7217daeda9dd1be33c1c724ff10d594e518288b1d92b411ec31b6f516f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:14Z\\\",\\\"message\\\":\\\"efault, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/certified-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"20da2226-531c-4179-9810-aa4026995ca3\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/certified-operators\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/certified-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/certified-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.214\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 06:47:14.602027 6475 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 06:47:14.602184 6475 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xqmc4_openshift-ovn-kubernetes(a1fb58d6-d9a4-4095-be46-a544216963f7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7353129cb709af2186d760a00dbef3ae91dc7feeb6dab2149754accb95fae36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xqmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:38Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.428952 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kx5rc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6932c20-41d2-487b-90b4-1e3c96cb17fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52d0c40d9e59c7e62b7479b955967849178079a9bb2ac73ac987bd4eddc11183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kx5rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:38Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.440889 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gvx56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjbl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjbl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:02Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-gvx56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:38Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.456244 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2231382e49c9132f7858a6b3c2ca5d90f3667f27cbbd7f785f7dac10eba7a78f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:46:45.928911 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:38Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.470122 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.470173 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.470183 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.470200 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.470211 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:38Z","lastTransitionTime":"2026-01-27T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.471774 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:38Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.485417 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:38Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.497252 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46ql2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3555bc2-e335-4479-8b6f-8b5970b27a25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7a41342bb50022b27d6a19ad1eef8be259001106eed45870bab131cd92fc35c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:36Z\\\",\\\"message\\\":\\\"2026-01-27T06:46:51+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8b8d5c2a-d08d-4f37-9c93-e0c8ee74c708\\\\n2026-01-27T06:46:51+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8b8d5c2a-d08d-4f37-9c93-e0c8ee74c708 to /host/opt/cni/bin/\\\\n2026-01-27T06:46:51Z [verbose] multus-daemon started\\\\n2026-01-27T06:46:51Z [verbose] Readiness Indicator file check\\\\n2026-01-27T06:47:36Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7ckp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46ql2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:38Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.527327 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91a38b2a-1669-414b-b2de-bd9f7d656db4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e496a7cea3a6ba7b8e4e60bd02f9cafc56cc4cf1049524b15749352b71b1551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40ca26c279d35c5369a97a8926c8c76b5773da64438b08a0fd721604f5cf1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51f23c6ba8596716e9d0b3001ed8d1494f186950ddb86e3187e98b5a9c5940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5e7ccdedeb2a9cc67c1eb6014f0a327f009ba
ace0decc678fd319c7f02e63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8be41d70648c6527439ed95843ec087f65506642438fab67094697952d0c69ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:38Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.539458 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c8f9bf62140eb4e54fc54bfef65710f511027fb29cc70e83d1f2516bb5b222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:38Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.548881 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f76abf7f-03d4-496f-b7bd-1bc63e0425e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19e9f9a416c0e4e8723cfe5ccf410490e6599f0def64f7e8e1beab5d3ae8896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76f9100193fe1dc49d79019ff1953e3504677a9ca67ea5d80b6ec1d79337bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gddlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:38Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.573023 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.573080 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.573094 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.573115 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.573131 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:38Z","lastTransitionTime":"2026-01-27T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.676650 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.676721 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.676733 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.676756 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.676769 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:38Z","lastTransitionTime":"2026-01-27T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.731408 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 15:30:33.737467531 +0000 UTC Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.746851 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.746853 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:38 crc kubenswrapper[4796]: E0127 06:47:38.746989 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:47:38 crc kubenswrapper[4796]: E0127 06:47:38.747259 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.779588 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.779650 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.779677 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.779708 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.779731 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:38Z","lastTransitionTime":"2026-01-27T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.883765 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.883821 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.883839 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.883884 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.883903 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:38Z","lastTransitionTime":"2026-01-27T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.986828 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.986901 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.986923 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.986952 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:38 crc kubenswrapper[4796]: I0127 06:47:38.986969 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:38Z","lastTransitionTime":"2026-01-27T06:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.089443 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.089480 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.089490 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.089504 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.089513 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:39Z","lastTransitionTime":"2026-01-27T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.192953 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.193501 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.193735 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.193928 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.194147 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:39Z","lastTransitionTime":"2026-01-27T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.298107 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.298167 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.298186 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.298210 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.298227 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:39Z","lastTransitionTime":"2026-01-27T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.402228 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.402287 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.402304 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.402327 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.402344 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:39Z","lastTransitionTime":"2026-01-27T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.506177 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.506979 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.507162 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.507352 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.507803 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:39Z","lastTransitionTime":"2026-01-27T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.611222 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.611287 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.611304 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.611327 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.611342 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:39Z","lastTransitionTime":"2026-01-27T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.714730 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.714802 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.714825 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.714854 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.714874 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:39Z","lastTransitionTime":"2026-01-27T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.731940 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 06:38:24.032504832 +0000 UTC Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.746716 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:47:39 crc kubenswrapper[4796]: E0127 06:47:39.747003 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.747046 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:39 crc kubenswrapper[4796]: E0127 06:47:39.747456 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.759251 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.818349 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.818403 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.818421 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.818446 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.818468 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:39Z","lastTransitionTime":"2026-01-27T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.921908 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.922118 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.922243 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.922401 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:39 crc kubenswrapper[4796]: I0127 06:47:39.922501 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:39Z","lastTransitionTime":"2026-01-27T06:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.025590 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.025663 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.025688 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.025723 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.025747 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:40Z","lastTransitionTime":"2026-01-27T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.128904 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.128976 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.128998 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.129026 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.129047 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:40Z","lastTransitionTime":"2026-01-27T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.231719 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.231789 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.231801 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.231819 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.231833 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:40Z","lastTransitionTime":"2026-01-27T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.335460 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.335571 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.335598 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.335631 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.335656 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:40Z","lastTransitionTime":"2026-01-27T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.439136 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.439200 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.439235 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.439265 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.439287 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:40Z","lastTransitionTime":"2026-01-27T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.543009 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.543075 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.543098 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.543131 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.543157 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:40Z","lastTransitionTime":"2026-01-27T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.646250 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.646649 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.646808 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.646946 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.647063 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:40Z","lastTransitionTime":"2026-01-27T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.733157 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 06:39:05.667553582 +0000 UTC Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.746661 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.746683 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:40 crc kubenswrapper[4796]: E0127 06:47:40.747182 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:47:40 crc kubenswrapper[4796]: E0127 06:47:40.747348 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.754309 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.755573 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.755696 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.755819 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.755914 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:40Z","lastTransitionTime":"2026-01-27T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.772923 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3321234c0c20f923d830de9c484b2999c545617b190e9d71878ac27415be5e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni
/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4185fbbd7
32a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:40Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.787449 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7c8820-8f4a-4cb8-a949-cbfa4e8efea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://948d327df2d50b17a182fed3cb301b1396229844a41a5270ea33d7c5e9f95db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48948084ba0e80ec0b29617d1cec75a0d5e625e2343ee2d2cb3485bfbc526e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a6b5f1a21e3d2b93baf990f32b73b024a048bffefe4304ff435a2bd428b099b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc39fbcef1d044b8d23138e48425ad33e0650e222fb2131f45b59ca7d954820\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:40Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.806641 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:40Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.824830 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:40Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.844965 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46ql2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3555bc2-e335-4479-8b6f-8b5970b27a25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7a41342bb50022b27d6a19ad1eef8be259001106eed45870bab131cd92fc35c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:36Z\\\",\\\"message\\\":\\\"2026-01-27T06:46:51+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8b8d5c2a-d08d-4f37-9c93-e0c8ee74c708\\\\n2026-01-27T06:46:51+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8b8d5c2a-d08d-4f37-9c93-e0c8ee74c708 to /host/opt/cni/bin/\\\\n2026-01-27T06:46:51Z [verbose] multus-daemon started\\\\n2026-01-27T06:46:51Z [verbose] Readiness Indicator file check\\\\n2026-01-27T06:47:36Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7ckp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46ql2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:40Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.859698 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.859732 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.859743 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.859760 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.859772 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:40Z","lastTransitionTime":"2026-01-27T06:47:40Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.866842 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1fb58d6-d9a4-4095-be46-a544216963f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa4ef31b3123acd6bde6c015b604904e010d9996f7ba318fa1a7ae674ea7cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88151a9fd7ac9365f1563c367588d8093a16206cfebfc1703f8d9cfceaee5134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bbcf36779d5d8ff4b06b0dba628a581f534c90af5774321a51bb6360c88db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e09fbd802f33aa6f2ebf9135fea9a98a1e8c0bb14440043fbada1b528b505ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e75d49621819f00db37b6f4471e5188a3eb781549bbae1422b6c2216a3366c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bafac0d6ec40525fc7e61773ebfc410b7f7f42535491b72e0301ce93bd1a6a46\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5b39a7217daeda9dd1be33c1c724ff10d594e518288b1d92b411ec31b6f516f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5b39a7217daeda9dd1be33c1c724ff10d594e518288b1d92b411ec31b6f516f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:14Z\\\",\\\"message\\\":\\\"efault, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/certified-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"20da2226-531c-4179-9810-aa4026995ca3\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/certified-operators\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/certified-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/certified-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.214\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 06:47:14.602027 6475 metrics.go:553] Stopping metrics server 
at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 06:47:14.602184 6475 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-xqmc4_openshift-ovn-kubernetes(a1fb58d6-d9a4-4095-be46-a544216963f7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7353129cb709af2186d760a00dbef3ae91dc7feeb6dab2149754accb95fae36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\
\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xqmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:40Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.882104 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kx5rc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6932c20-41d2-487b-90b4-1e3c96cb17fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52d0c40d9e59c7e62b7479b955967849178079a9bb2ac73ac987bd4eddc11183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kx5rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:40Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.896747 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gvx56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjbl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjbl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gvx56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:40Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.913328 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2231382e49c9132f7858a6b3c2ca5d90f3667f27cbbd7f785f7dac10eba7a78f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:46:45.928911 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:40Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.930036 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c8f9bf62140eb4e54fc54bfef65710f511027fb29cc70e83d1f2516bb5b222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:40Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.942221 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f76abf7f-03d4-496f-b7bd-1bc63e0425e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19e9f9a416c0e4e8723cfe5ccf410490e6599f0def64f7e8e1beab5d3ae8896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76f9100193fe1dc49d79019ff1953e3504677a9ca67ea5d80b6ec1d79337bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gddlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:40Z is after 2025-08-24T17:21:41Z" Jan 27 
06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.961393 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91a38b2a-1669-414b-b2de-bd9f7d656db4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e496a7cea3a6ba7b8e4e60bd02f9cafc56cc4cf1049524b15749352b71b1551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40ca26c279d35c5369a97a8926c8c76b5773da64438b08a0fd721604f5cf1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51f23c6ba8596716e9d0b3001ed8d1494f186950ddb86e3187e98b5a9c5940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5e7ccdedeb2a9cc67c1eb6014f0a327f009baace0decc678fd319c7f02e63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8be41d70648c6527439ed95843ec087f65506642438fab67094697952d0c69ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:40Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.964890 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.965028 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.965098 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.965160 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.965221 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:40Z","lastTransitionTime":"2026-01-27T06:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.973126 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d74126aa-8355-44f9-a22e-180e90b39c56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6eebf6f0bf5d7eb7abb8cd624d134d69945b319d695580cf4f540ac6870a1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaa72f37c854df50cd7fa5fbe526f1fba2f70106de6ce6c178dd711b498b7466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaa72f37c854df50cd7fa5fbe526f1fba2f70106de6ce6c178dd711b498b7466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:40Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.985059 4796 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:40Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:40 crc kubenswrapper[4796]: I0127 06:47:40.999419 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:40Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.014148 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:41Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.026224 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://372dffb4f29750115dda48ed80597b347e7021ac6dcb2c4161ee2f290f27024f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T06:47:41Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.041058 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec0d72cc9d4158b10455650c8b8f916e9c314616c0cf92b4e1f2b966fa650e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:41Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.056969 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5411bac6-98fc-4818-b8d8-719ac8a4e77d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4703bc30d7f8c3e6340ce7f818985b99667f3703b1399e8542341618f423ce05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6a258372b9862ec4e513ffb93a28fb346a67b05941486a4be0275b94cea2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef72b83b73d0b802d6dd4dfd087f388d7a1e01621dd126b13729831545461d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://859ce6407959a0931a232e38824568585b947803cb4437a0d0e5101c8b19a36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ce6407959a0931a232e38824568585b947803cb4437a0d0e5101c8b19a36b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:41Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.068601 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.068668 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.068681 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.068708 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.068719 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:41Z","lastTransitionTime":"2026-01-27T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.171676 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.171724 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.171736 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.171755 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.171767 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:41Z","lastTransitionTime":"2026-01-27T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.274899 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.274957 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.274970 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.274988 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.275001 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:41Z","lastTransitionTime":"2026-01-27T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.377823 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.377909 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.377933 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.377964 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.377983 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:41Z","lastTransitionTime":"2026-01-27T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.481302 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.481918 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.482792 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.483002 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.483173 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:41Z","lastTransitionTime":"2026-01-27T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.592673 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.593043 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.593216 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.593359 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.593671 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:41Z","lastTransitionTime":"2026-01-27T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.696717 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.696788 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.696811 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.696840 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.696861 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:41Z","lastTransitionTime":"2026-01-27T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.734385 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 05:31:14.439630766 +0000 UTC Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.746928 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:41 crc kubenswrapper[4796]: E0127 06:47:41.747451 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.748394 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:47:41 crc kubenswrapper[4796]: E0127 06:47:41.748948 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.801193 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.801453 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.801576 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.801677 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.801754 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:41Z","lastTransitionTime":"2026-01-27T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.904941 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.905000 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.905016 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.905041 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:41 crc kubenswrapper[4796]: I0127 06:47:41.905057 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:41Z","lastTransitionTime":"2026-01-27T06:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.008022 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.008066 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.008078 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.008095 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.008106 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:42Z","lastTransitionTime":"2026-01-27T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.110840 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.110900 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.110921 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.110947 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.110964 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:42Z","lastTransitionTime":"2026-01-27T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.214210 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.214254 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.214294 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.214334 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.214348 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:42Z","lastTransitionTime":"2026-01-27T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.317774 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.317852 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.317876 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.317908 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.317926 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:42Z","lastTransitionTime":"2026-01-27T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.421325 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.421939 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.422358 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.422715 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.422860 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:42Z","lastTransitionTime":"2026-01-27T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.525952 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.526005 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.526023 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.526046 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.526065 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:42Z","lastTransitionTime":"2026-01-27T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.628671 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.628733 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.628757 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.628788 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.628809 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:42Z","lastTransitionTime":"2026-01-27T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.732050 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.732126 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.732150 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.732182 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.732200 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:42Z","lastTransitionTime":"2026-01-27T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.735243 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 13:01:10.211812629 +0000 UTC Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.746927 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:42 crc kubenswrapper[4796]: E0127 06:47:42.747257 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.747331 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:42 crc kubenswrapper[4796]: E0127 06:47:42.747848 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.748167 4796 scope.go:117] "RemoveContainer" containerID="a5b39a7217daeda9dd1be33c1c724ff10d594e518288b1d92b411ec31b6f516f" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.835574 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.835627 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.835643 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.835663 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.835677 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:42Z","lastTransitionTime":"2026-01-27T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.938960 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.939002 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.939015 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.939031 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:42 crc kubenswrapper[4796]: I0127 06:47:42.939044 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:42Z","lastTransitionTime":"2026-01-27T06:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.041755 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.041821 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.041837 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.041862 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.041876 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:43Z","lastTransitionTime":"2026-01-27T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.144958 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.144985 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.144993 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.145007 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.145024 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:43Z","lastTransitionTime":"2026-01-27T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.248043 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.248103 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.248121 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.248147 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.248169 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:43Z","lastTransitionTime":"2026-01-27T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.351485 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.351601 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.351613 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.351631 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.351643 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:43Z","lastTransitionTime":"2026-01-27T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.454688 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.454728 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.454739 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.454755 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.454767 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:43Z","lastTransitionTime":"2026-01-27T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.557102 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.557144 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.557156 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.557172 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.557188 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:43Z","lastTransitionTime":"2026-01-27T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.659335 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.659410 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.659428 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.659454 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.659471 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:43Z","lastTransitionTime":"2026-01-27T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.735677 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 04:03:38.384748342 +0000 UTC Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.744082 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.744116 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.744126 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.744141 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.744155 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:43Z","lastTransitionTime":"2026-01-27T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.746163 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:47:43 crc kubenswrapper[4796]: E0127 06:47:43.746334 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.746441 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:43 crc kubenswrapper[4796]: E0127 06:47:43.746654 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:47:43 crc kubenswrapper[4796]: E0127 06:47:43.766766 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab5d23f4-0a1a-4348-a4ed-cd82856490af\\\",\\\"systemUUID\\\":\\\"ea2a725c-47df-4291-8c97-fc5620e930c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:43Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.770600 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.770622 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.770631 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.770644 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.770652 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:43Z","lastTransitionTime":"2026-01-27T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:43 crc kubenswrapper[4796]: E0127 06:47:43.781239 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab5d23f4-0a1a-4348-a4ed-cd82856490af\\\",\\\"systemUUID\\\":\\\"ea2a725c-47df-4291-8c97-fc5620e930c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:43Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.784069 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.784110 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.784126 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.784165 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.784180 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:43Z","lastTransitionTime":"2026-01-27T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:43 crc kubenswrapper[4796]: E0127 06:47:43.798870 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab5d23f4-0a1a-4348-a4ed-cd82856490af\\\",\\\"systemUUID\\\":\\\"ea2a725c-47df-4291-8c97-fc5620e930c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:43Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.802001 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.802038 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.802054 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.802073 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.802088 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:43Z","lastTransitionTime":"2026-01-27T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:43 crc kubenswrapper[4796]: E0127 06:47:43.818954 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab5d23f4-0a1a-4348-a4ed-cd82856490af\\\",\\\"systemUUID\\\":\\\"ea2a725c-47df-4291-8c97-fc5620e930c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:43Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.823300 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.823345 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.823362 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.823382 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.823398 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:43Z","lastTransitionTime":"2026-01-27T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:43 crc kubenswrapper[4796]: E0127 06:47:43.840528 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab5d23f4-0a1a-4348-a4ed-cd82856490af\\\",\\\"systemUUID\\\":\\\"ea2a725c-47df-4291-8c97-fc5620e930c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:43Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:43 crc kubenswrapper[4796]: E0127 06:47:43.840651 4796 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.843031 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.843052 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.843060 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.843074 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.843083 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:43Z","lastTransitionTime":"2026-01-27T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.945154 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.945396 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.945472 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.945560 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:43 crc kubenswrapper[4796]: I0127 06:47:43.945627 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:43Z","lastTransitionTime":"2026-01-27T06:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.047590 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.047633 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.047645 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.047665 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.047677 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:44Z","lastTransitionTime":"2026-01-27T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.150884 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.150957 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.150981 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.151011 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.151029 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:44Z","lastTransitionTime":"2026-01-27T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.253906 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.253982 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.254003 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.254023 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.254036 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:44Z","lastTransitionTime":"2026-01-27T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.261741 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xqmc4_a1fb58d6-d9a4-4095-be46-a544216963f7/ovnkube-controller/2.log" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.265720 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" event={"ID":"a1fb58d6-d9a4-4095-be46-a544216963f7","Type":"ContainerStarted","Data":"24ee637b8b6de85e4202e420fb9eb9c78f0c496062f69cb82ed564bf9f524817"} Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.266356 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.291890 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91a38b2a-1669-414b-b2de-bd9f7d656db4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e496a7cea3a6ba7b8e4e60bd02f9cafc56cc4cf1049524b15749352b71b1551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40ca26c279d35c5369a97a8926c8c76b5773da64438b08a0fd721604f5cf1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cer
ts\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51f23c6ba8596716e9d0b3001ed8d1494f186950ddb86e3187e98b5a9c5940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5e7ccdedeb2a9cc67c1eb6014f0a327f009baace0decc678fd319c7f02e63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8be41d70648c6527439ed95843ec087f65506642438fab67094697952d0c69ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:44Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.314431 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c8f9bf62140eb4e54fc54bfef65710f511027fb29cc70e83d1f2516bb5b222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:44Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.331611 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f76abf7f-03d4-496f-b7bd-1bc63e0425e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19e9f9a416c0e4e8723cfe5ccf410490e6599f0def64f7e8e1beab5d3ae8896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76f9100193fe1dc49d79019ff1953e3504677a9ca67ea5d80b6ec1d79337bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gddlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:44Z is after 2025-08-24T17:21:41Z" Jan 27 
06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.347496 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec0d72cc9d4158b10455650c8b8f916e9c314616c0cf92b4e1f2b966fa650e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-27T06:47:44Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.356997 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.357126 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.357229 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.357384 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.357481 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:44Z","lastTransitionTime":"2026-01-27T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.362489 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5411bac6-98fc-4818-b8d8-719ac8a4e77d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4703bc30d7f8c3e6340ce7f818985b99667f3703b1399e8542341618f423ce05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6a258372b9862ec4e513ffb93a28fb346a67b05941486a4be0275b94cea2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11
f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef72b83b73d0b802d6dd4dfd087f388d7a1e01621dd126b13729831545461d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://859ce6407959a0931a232e38824568585b947803cb4437a0d0e5101c8b19a36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ce6407959a0931a232e38824568585b947803cb4437a0d0e5101c8b19a36b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:44Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.376219 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d74126aa-8355-44f9-a22e-180e90b39c56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6eebf6f0bf5d7eb7abb8cd624d134d69945b319d695580cf4f540ac6870a1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaa72f37c854df50cd7fa5fbe526f1fba2f70106de6ce6c178dd711b498b7466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaa72f37c854df50cd7fa5fbe526f1fba2f70106de6ce6c178dd711b498b7466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:44Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.391978 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:44Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.409912 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:44Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.427972 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:44Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.442908 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://372dffb4f29750115dda48ed80597b347e7021ac6dcb2c4161ee2f290f27024f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T06:47:44Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.459823 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7c8820-8f4a-4cb8-a949-cbfa4e8efea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://948d327df2d50b17a182fed3cb301b1396229844a41a5270ea33d7c5e9f95db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48948084ba0e80ec0b29617d1cec75a0d5e625e2343ee2d2cb3485bfbc526e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a6b5f1a21e3d2b93baf990f32b73b024a048bffefe4304ff435a2bd428b099b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc39fbcef1d044b8d23138e48425ad33e0650e222fb2131f45b59ca7d954820\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:44Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.460081 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.460115 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.460129 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.460149 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.460163 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:44Z","lastTransitionTime":"2026-01-27T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.475371 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3321234c0c20f923d830de9c484b2999c545617b190e9d71878ac27415be5e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:44Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.497191 4796 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-gvx56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjbl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjbl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gvx56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:44Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.517458 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2231382e49c9132f7858a6b3c2ca5d90f3667f27cbbd7f785f7dac10eba7a78f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:46:45.928911 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:44Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.533019 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:44Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.547225 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:44Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.562829 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.562931 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.562953 4796 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.562984 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.563008 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:44Z","lastTransitionTime":"2026-01-27T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.566837 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46ql2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3555bc2-e335-4479-8b6f-8b5970b27a25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7a41342bb50022b27d6a19ad1eef8be259001106eed45870bab131cd92fc35c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:36Z\\\",\\\"message\\\":\\\"2026-01-27T06:46:51+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8b8d5c2a-d08d-4f37-9c93-e0c8ee74c708\\\\n2026-01-27T06:46:51+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8b8d5c2a-d08d-4f37-9c93-e0c8ee74c708 to /host/opt/cni/bin/\\\\n2026-01-27T06:46:51Z [verbose] multus-daemon started\\\\n2026-01-27T06:46:51Z [verbose] Readiness Indicator file check\\\\n2026-01-27T06:47:36Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7ckp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46ql2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:44Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.593583 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1fb58d6-d9a4-4095-be46-a544216963f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa4ef31b3123acd6bde6c015b604904e010d9996f7ba318fa1a7ae674ea7cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88151a9fd7ac9365f1563c367588d8093a16206cfebfc1703f8d9cfceaee5134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bbcf36779d5d8ff4b06b0dba628a581f534c90af5774321a51bb6360c88db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e09fbd802f33aa6f2ebf9135fea9a98a1e8c0bb14440043fbada1b528b505ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e75d49621819f00db37b6f4471e5188a3eb781549bbae1422b6c2216a3366c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bafac0d6ec40525fc7e61773ebfc410b7f7f42535491b72e0301ce93bd1a6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ee637b8b6de85e4202e420fb9eb9c78f0c496062f69cb82ed564bf9f524817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5b39a7217daeda9dd1be33c1c724ff10d594e518288b1d92b411ec31b6f516f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:14Z\\\",\\\"message\\\":\\\"efault, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/certified-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"20da2226-531c-4179-9810-aa4026995ca3\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/certified-operators\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/certified-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/certified-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.214\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 06:47:14.602027 6475 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 06:47:14.602184 6475 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7353129cb709af2186d760a00dbef3ae91dc7feeb6dab2149754accb95fae36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xqmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:44Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.608966 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kx5rc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6932c20-41d2-487b-90b4-1e3c96cb17fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52d0c40d9e59c7e62b7479b955967849178079a9bb2ac73ac987bd4eddc11183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kx5rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:44Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.666130 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.666174 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.666186 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.666207 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.666224 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:44Z","lastTransitionTime":"2026-01-27T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.736805 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 12:02:01.467264885 +0000 UTC Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.746445 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.746523 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:44 crc kubenswrapper[4796]: E0127 06:47:44.746660 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:47:44 crc kubenswrapper[4796]: E0127 06:47:44.746887 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.768901 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.768956 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.768973 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.768996 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.769014 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:44Z","lastTransitionTime":"2026-01-27T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.871717 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.871783 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.871802 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.871826 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.871845 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:44Z","lastTransitionTime":"2026-01-27T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.975662 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.975724 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.975742 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.975768 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:44 crc kubenswrapper[4796]: I0127 06:47:44.975786 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:44Z","lastTransitionTime":"2026-01-27T06:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.083661 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.083807 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.083833 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.083864 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.083889 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:45Z","lastTransitionTime":"2026-01-27T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.187189 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.187268 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.187291 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.187319 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.187338 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:45Z","lastTransitionTime":"2026-01-27T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.272222 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xqmc4_a1fb58d6-d9a4-4095-be46-a544216963f7/ovnkube-controller/3.log" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.273494 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xqmc4_a1fb58d6-d9a4-4095-be46-a544216963f7/ovnkube-controller/2.log" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.277658 4796 generic.go:334] "Generic (PLEG): container finished" podID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerID="24ee637b8b6de85e4202e420fb9eb9c78f0c496062f69cb82ed564bf9f524817" exitCode=1 Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.277716 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" event={"ID":"a1fb58d6-d9a4-4095-be46-a544216963f7","Type":"ContainerDied","Data":"24ee637b8b6de85e4202e420fb9eb9c78f0c496062f69cb82ed564bf9f524817"} Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.277767 4796 scope.go:117] "RemoveContainer" containerID="a5b39a7217daeda9dd1be33c1c724ff10d594e518288b1d92b411ec31b6f516f" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.279055 4796 scope.go:117] "RemoveContainer" containerID="24ee637b8b6de85e4202e420fb9eb9c78f0c496062f69cb82ed564bf9f524817" Jan 27 06:47:45 crc kubenswrapper[4796]: E0127 06:47:45.279378 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xqmc4_openshift-ovn-kubernetes(a1fb58d6-d9a4-4095-be46-a544216963f7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.293428 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.293460 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.293472 4796 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.293486 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.293496 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:45Z","lastTransitionTime":"2026-01-27T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.308521 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91a38b2a-1669-414b-b2de-bd9f7d656db4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e496a7cea3a6ba7b8e4e60bd02f9cafc56cc4cf1049524b15749352b71b1551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40ca26c279d35c5369a97a8926c8c76b5773da64438b08a0fd721604f5cf1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/
lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51f23c6ba8596716e9d0b3001ed8d1494f186950ddb86e3187e98b5a9c5940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5e7ccdedeb2a9cc67c1eb6014f0a327f009baace0decc678fd319c7f02e63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8be41d70648c6527439ed95843ec087f65506642438fab67094697952d0c69ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:45Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.323701 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c8f9bf62140eb4e54fc54bfef65710f511027fb29cc70e83d1f2516bb5b222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:45Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.339568 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f76abf7f-03d4-496f-b7bd-1bc63e0425e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19e9f9a416c0e4e8723cfe5ccf410490e6599f0def64f7e8e1beab5d3ae8896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76f9100193fe1dc49d79019ff1953e3504677a9ca67ea5d80b6ec1d79337bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gddlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:45Z is after 2025-08-24T17:21:41Z" Jan 27 
06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.357648 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5411bac6-98fc-4818-b8d8-719ac8a4e77d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4703bc30d7f8c3e6340ce7f818985b99667f3703b1399e8542341618f423ce05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6a258372b9862ec4e513ffb93a28fb346a67b05941486a4be0275b94cea2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef72b83b73d0b802d6dd4dfd087f388d7a1e01621dd126b13729831545461d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://859ce6407959a0931a232e38824568585b947803cb4437a0d0e5101c8b19a36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ce6407959a0931a232e38824568585b947803cb4437a0d0e5101c8b19a36b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:45Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.371882 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d74126aa-8355-44f9-a22e-180e90b39c56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6eebf6f0bf5d7eb7abb8cd624d134d69945b319d695580cf4f540ac6870a1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaa72f37c854df50cd7fa5fbe526f1fba2f70106de6ce6c178dd711b498b7466\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaa72f37c854df50cd7fa5fbe526f1fba2f70106de6ce6c178dd711b498b7466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:45Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.392378 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:45Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.396114 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.396181 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.396202 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.396232 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.396256 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:45Z","lastTransitionTime":"2026-01-27T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.414328 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:45Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.435475 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:45Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.448774 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://372dffb4f29750115dda48ed80597b347e7021ac6dcb2c4161ee2f290f27024f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:45Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.461392 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec0d72cc9d4158b10455650c8b8f916e9c314616c0cf92b4e1f2b966fa650e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:45Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.472938 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7c8820-8f4a-4cb8-a949-cbfa4e8efea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://948d327df2d50b17a182fed3cb301b1396229844a41a5270ea33d7c5e9f95db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48948084ba0e80ec0b29617d1cec75a0d5e625e2343ee2d2cb3485bfbc526e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a6b5f1a21e3d2b93baf990f32b73b024a048bffefe4304ff435a2bd428b099b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-c
ontroller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc39fbcef1d044b8d23138e48425ad33e0650e222fb2131f45b59ca7d954820\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:45Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.487921 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3321234c0c20f923d830de9c484b2999c545617b190e9d71878ac27415be5e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:45Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.504475 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.504586 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:45 crc 
kubenswrapper[4796]: I0127 06:47:45.504611 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.504640 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.504668 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:45Z","lastTransitionTime":"2026-01-27T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.506464 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2231382e49c9132f7858a6b3c2ca5d90f3667f27cbbd7f785f7dac10eba7a78f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:46:45.928911 1 
cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:45Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.522129 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:45Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.537796 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:45Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.552395 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46ql2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3555bc2-e335-4479-8b6f-8b5970b27a25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7a41342bb50022b27d6a19ad1eef8be259001106eed45870bab131cd92fc35c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:36Z\\\",\\\"message\\\":\\\"2026-01-27T06:46:51+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8b8d5c2a-d08d-4f37-9c93-e0c8ee74c708\\\\n2026-01-27T06:46:51+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8b8d5c2a-d08d-4f37-9c93-e0c8ee74c708 to /host/opt/cni/bin/\\\\n2026-01-27T06:46:51Z [verbose] multus-daemon started\\\\n2026-01-27T06:46:51Z [verbose] Readiness Indicator file check\\\\n2026-01-27T06:47:36Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7ckp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46ql2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:45Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.569998 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1fb58d6-d9a4-4095-be46-a544216963f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa4ef31b3123acd6bde6c015b604904e010d9996f7ba318fa1a7ae674ea7cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88151a9fd7ac9365f1563c367588d8093a16206cfebfc1703f8d9cfceaee5134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bbcf36779d5d8ff4b06b0dba628a581f534c90af5774321a51bb6360c88db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e09fbd802f33aa6f2ebf9135fea9a98a1e8c0bb14440043fbada1b528b505ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e75d49621819f00db37b6f4471e5188a3eb781549bbae1422b6c2216a3366c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bafac0d6ec40525fc7e61773ebfc410b7f7f42535491b72e0301ce93bd1a6a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ee637b8b6de85e4202e420fb9eb9c78f0c496062f69cb82ed564bf9f524817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5b39a7217daeda9dd1be33c1c724ff10d594e518288b1d92b411ec31b6f516f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:14Z\\\",\\\"message\\\":\\\"efault, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/certified-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"20da2226-531c-4179-9810-aa4026995ca3\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/certified-operators\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/certified-operators_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/certified-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.214\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0127 06:47:14.602027 6475 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0127 06:47:14.602184 6475 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24ee637b8b6de85e4202e420fb9eb9c78f0c496062f69cb82ed564bf9f524817\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:44Z\\\",\\\"message\\\":\\\"rt:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.4.176\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:1936, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0127 06:47:44.311929 6908 services_controller.go:444] Built service openshift-ingress/router-internal-default LB per-node configs for 
network=default: []services.lbConfig(nil)\\\\nI0127 06:47:44.311746 6908 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0127 06:47:44.311565 6908 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7353129cb709af2186d760a00dbef3ae91dc7feeb6dab2149754accb95fae36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"m
ountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xqmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:45Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.582741 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kx5rc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6932c20-41d2-487b-90b4-1e3c96cb17fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52d0c40d9e59c7e62b7479b955967849178079a9bb2ac73ac987bd4eddc11183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kx5rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:45Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.594637 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gvx56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjbl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjbl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gvx56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:45Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.607508 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.607579 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.607591 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.607608 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.607621 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:45Z","lastTransitionTime":"2026-01-27T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.710796 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.711198 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.711277 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.711386 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.711519 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:45Z","lastTransitionTime":"2026-01-27T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.737220 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 01:53:16.288840355 +0000 UTC Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.746734 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.746734 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:45 crc kubenswrapper[4796]: E0127 06:47:45.746997 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:47:45 crc kubenswrapper[4796]: E0127 06:47:45.747148 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.815439 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.815482 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.815496 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.815517 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.815530 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:45Z","lastTransitionTime":"2026-01-27T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.918634 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.919016 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.919178 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.919365 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:45 crc kubenswrapper[4796]: I0127 06:47:45.919521 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:45Z","lastTransitionTime":"2026-01-27T06:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.026759 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.026834 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.026859 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.026892 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.026914 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:46Z","lastTransitionTime":"2026-01-27T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.130177 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.130240 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.130259 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.130283 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.130302 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:46Z","lastTransitionTime":"2026-01-27T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.233205 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.233282 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.233302 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.233330 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.233352 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:46Z","lastTransitionTime":"2026-01-27T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.285297 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xqmc4_a1fb58d6-d9a4-4095-be46-a544216963f7/ovnkube-controller/3.log" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.290130 4796 scope.go:117] "RemoveContainer" containerID="24ee637b8b6de85e4202e420fb9eb9c78f0c496062f69cb82ed564bf9f524817" Jan 27 06:47:46 crc kubenswrapper[4796]: E0127 06:47:46.290418 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xqmc4_openshift-ovn-kubernetes(a1fb58d6-d9a4-4095-be46-a544216963f7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.310705 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec0d72cc9d4158b10455650c8b8f916e9c314616c0cf92b4e1f2b966fa650e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:46Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.328221 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5411bac6-98fc-4818-b8d8-719ac8a4e77d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4703bc30d7f8c3e6340ce7f818985b99667f3703b1399e8542341618f423ce05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6a258372b9862ec4e513ffb93a28fb346a67b05941486a4be0275b94cea2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef72b83b73d0b802d6dd4dfd087f388d7a1e01621dd126b13729831545461d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://859ce6407959a0931a232e38824568585b947803cb4437a0d0e5101c8b19a36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ce6407959a0931a232e38824568585b947803cb4437a0d0e5101c8b19a36b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:46Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.336035 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.336094 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.336106 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.336131 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.336144 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:46Z","lastTransitionTime":"2026-01-27T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.345255 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d74126aa-8355-44f9-a22e-180e90b39c56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6eebf6f0bf5d7eb7abb8cd624d134d69945b319d695580cf4f540ac6870a1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaa72f37c854df50cd7fa5fbe526f1fba2f70106de6ce6c178dd711b498b7466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaa72f37c854df50cd7fa5fbe526f1fba2f70106de6ce6c178dd711b498b7466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:46Z is after 2025-08-24T17:21:41Z" Jan 
27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.366313 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:46Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.387372 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:46Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.403964 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:46Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.418588 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://372dffb4f29750115dda48ed80597b347e7021ac6dcb2c4161ee2f290f27024f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T06:47:46Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.435425 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7c8820-8f4a-4cb8-a949-cbfa4e8efea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://948d327df2d50b17a182fed3cb301b1396229844a41a5270ea33d7c5e9f95db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48948084ba0e80ec0b29617d1cec75a0d5e625e2343ee2d2cb3485bfbc526e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a6b5f1a21e3d2b93baf990f32b73b024a048bffefe4304ff435a2bd428b099b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc39fbcef1d044b8d23138e48425ad33e0650e222fb2131f45b59ca7d954820\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:46Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.439421 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.439482 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.439498 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.439521 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.439562 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:46Z","lastTransitionTime":"2026-01-27T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.453671 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3321234c0c20f923d830de9c484b2999c545617b190e9d71878ac27415be5e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:46Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.466269 4796 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-gvx56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjbl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjbl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gvx56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:46Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.483987 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2231382e49c9132f7858a6b3c2ca5d90f3667f27cbbd7f785f7dac10eba7a78f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:46:45.928911 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:46Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.498248 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:46Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.512054 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:46Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.526195 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46ql2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3555bc2-e335-4479-8b6f-8b5970b27a25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7a41342bb50022b27d6a19ad1eef8be259001106eed45870bab131cd92fc35c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:36Z\\\",\\\"message\\\":\\\"2026-01-27T06:46:51+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8b8d5c2a-d08d-4f37-9c93-e0c8ee74c708\\\\n2026-01-27T06:46:51+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8b8d5c2a-d08d-4f37-9c93-e0c8ee74c708 to /host/opt/cni/bin/\\\\n2026-01-27T06:46:51Z [verbose] multus-daemon started\\\\n2026-01-27T06:46:51Z [verbose] Readiness Indicator file check\\\\n2026-01-27T06:47:36Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7ckp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46ql2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:46Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.542442 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.542509 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.542556 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.542585 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.542605 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:46Z","lastTransitionTime":"2026-01-27T06:47:46Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.553086 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1fb58d6-d9a4-4095-be46-a544216963f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa4ef31b3123acd6bde6c015b604904e010d9996f7ba318fa1a7ae674ea7cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88151a9fd7ac9365f1563c367588d8093a16206cfebfc1703f8d9cfceaee5134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bbcf36779d5d8ff4b06b0dba628a581f534c90af5774321a51bb6360c88db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e09fbd802f33aa6f2ebf9135fea9a98a1e8c0bb14440043fbada1b528b505ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e75d49621819f00db37b6f4471e5188a3eb781549bbae1422b6c2216a3366c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bafac0d6ec40525fc7e61773ebfc410b7f7f42535491b72e0301ce93bd1a6a46\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ee637b8b6de85e4202e420fb9eb9c78f0c496062f69cb82ed564bf9f524817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24ee637b8b6de85e4202e420fb9eb9c78f0c496062f69cb82ed564bf9f524817\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:44Z\\\",\\\"message\\\":\\\"rt:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.4.176\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:1936, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0127 06:47:44.311929 6908 services_controller.go:444] Built service openshift-ingress/router-internal-default LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0127 06:47:44.311746 6908 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0127 06:47:44.311565 6908 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xqmc4_openshift-ovn-kubernetes(a1fb58d6-d9a4-4095-be46-a544216963f7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7353129cb709af2186d760a00dbef3ae91dc7feeb6dab2149754accb95fae36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xqmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:46Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.564843 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kx5rc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6932c20-41d2-487b-90b4-1e3c96cb17fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52d0c40d9e59c7e62b7479b955967849178079a9bb2ac73ac987bd4eddc11183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kx5rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:46Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.589004 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91a38b2a-1669-414b-b2de-bd9f7d656db4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e496a7cea3a6ba7b8e4e60bd02f9cafc56cc4cf1049524b15749352b71b1551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40ca26c279d35c5369a97a8926c8c76b5773da64438b08a0fd721604f5cf1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51f23c6ba8596716e9d0b3001ed8d1494f186950ddb86e3187e98b5a9c5940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5e7ccdedeb2a9cc67c1eb6014f0a327f009ba
ace0decc678fd319c7f02e63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8be41d70648c6527439ed95843ec087f65506642438fab67094697952d0c69ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:46Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.606175 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c8f9bf62140eb4e54fc54bfef65710f511027fb29cc70e83d1f2516bb5b222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:46Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.618666 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f76abf7f-03d4-496f-b7bd-1bc63e0425e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19e9f9a416c0e4e8723cfe5ccf410490e6599f0def64f7e8e1beab5d3ae8896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76f9100193fe1dc49d79019ff1953e3504677a9ca67ea5d80b6ec1d79337bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gddlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:46Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.645651 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.645749 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.645777 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.645813 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.645848 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:46Z","lastTransitionTime":"2026-01-27T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.738665 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 18:03:32.148330802 +0000 UTC Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.747137 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.747189 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:46 crc kubenswrapper[4796]: E0127 06:47:46.747920 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:47:46 crc kubenswrapper[4796]: E0127 06:47:46.747810 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.749746 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.749938 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.750067 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.750209 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.750324 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:46Z","lastTransitionTime":"2026-01-27T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.860701 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.860771 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.860789 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.860816 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.860833 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:46Z","lastTransitionTime":"2026-01-27T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.964094 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.964166 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.964175 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.964192 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:46 crc kubenswrapper[4796]: I0127 06:47:46.964202 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:46Z","lastTransitionTime":"2026-01-27T06:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.067594 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.067664 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.067684 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.067710 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.067728 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:47Z","lastTransitionTime":"2026-01-27T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.171130 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.171212 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.171231 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.171259 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.171278 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:47Z","lastTransitionTime":"2026-01-27T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.274004 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.274053 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.274069 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.274090 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.274104 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:47Z","lastTransitionTime":"2026-01-27T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.376814 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.376886 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.376904 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.376928 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.376945 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:47Z","lastTransitionTime":"2026-01-27T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.479943 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.479993 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.480009 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.480031 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.480048 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:47Z","lastTransitionTime":"2026-01-27T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.583424 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.583516 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.583568 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.583602 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.583623 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:47Z","lastTransitionTime":"2026-01-27T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.686492 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.686676 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.686702 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.686733 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.686759 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:47Z","lastTransitionTime":"2026-01-27T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.740753 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 05:42:01.174809404 +0000 UTC Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.747085 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.747110 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:47 crc kubenswrapper[4796]: E0127 06:47:47.747277 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:47:47 crc kubenswrapper[4796]: E0127 06:47:47.747395 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.789229 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.789380 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.789414 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.789441 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.789464 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:47Z","lastTransitionTime":"2026-01-27T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.892154 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.892223 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.892240 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.892262 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.892281 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:47Z","lastTransitionTime":"2026-01-27T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.996075 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.996178 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.996195 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.996223 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:47 crc kubenswrapper[4796]: I0127 06:47:47.996243 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:47Z","lastTransitionTime":"2026-01-27T06:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.099498 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.099613 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.099637 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.099666 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.099688 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:48Z","lastTransitionTime":"2026-01-27T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.203059 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.203152 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.203173 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.203229 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.203248 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:48Z","lastTransitionTime":"2026-01-27T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.306578 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.306672 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.306693 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.306725 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.306745 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:48Z","lastTransitionTime":"2026-01-27T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.410378 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.410446 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.410465 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.410493 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.410508 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:48Z","lastTransitionTime":"2026-01-27T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.514293 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.514349 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.514362 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.514383 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.514396 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:48Z","lastTransitionTime":"2026-01-27T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.618497 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.618630 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.618651 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.618682 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.618704 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:48Z","lastTransitionTime":"2026-01-27T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.722042 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.722109 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.722127 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.722153 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.722174 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:48Z","lastTransitionTime":"2026-01-27T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.741244 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 16:09:53.45841386 +0000 UTC Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.746831 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.746831 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:48 crc kubenswrapper[4796]: E0127 06:47:48.747009 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:47:48 crc kubenswrapper[4796]: E0127 06:47:48.747265 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.826353 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.826431 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.826450 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.826487 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.826513 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:48Z","lastTransitionTime":"2026-01-27T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.929985 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.930056 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.930070 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.930094 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:48 crc kubenswrapper[4796]: I0127 06:47:48.930115 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:48Z","lastTransitionTime":"2026-01-27T06:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.033063 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.033122 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.033142 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.033169 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.033186 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:49Z","lastTransitionTime":"2026-01-27T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.136259 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.136313 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.136327 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.136352 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.136365 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:49Z","lastTransitionTime":"2026-01-27T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.239250 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.239321 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.239341 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.239366 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.239384 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:49Z","lastTransitionTime":"2026-01-27T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.341910 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.341957 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.341972 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.342036 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.342048 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:49Z","lastTransitionTime":"2026-01-27T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.445068 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.445127 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.445138 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.445154 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.445167 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:49Z","lastTransitionTime":"2026-01-27T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.460439 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:47:49 crc kubenswrapper[4796]: E0127 06:47:49.460650 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:53.460618274 +0000 UTC m=+154.567585631 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.461936 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.462209 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:49 crc kubenswrapper[4796]: E0127 06:47:49.462672 4796 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:47:49 crc kubenswrapper[4796]: E0127 06:47:49.462795 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:48:53.46276567 +0000 UTC m=+154.569733027 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 27 06:47:49 crc kubenswrapper[4796]: E0127 06:47:49.463293 4796 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:47:49 crc kubenswrapper[4796]: E0127 06:47:49.463388 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:48:53.463369495 +0000 UTC m=+154.570336862 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.548911 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.548996 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.549031 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.549065 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.549087 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:49Z","lastTransitionTime":"2026-01-27T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.563811 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.563895 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:49 crc kubenswrapper[4796]: E0127 06:47:49.564015 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:47:49 crc kubenswrapper[4796]: E0127 06:47:49.564049 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:47:49 crc kubenswrapper[4796]: E0127 06:47:49.564068 4796 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:47:49 crc kubenswrapper[4796]: E0127 06:47:49.564151 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-27 06:48:53.564125253 +0000 UTC m=+154.671092610 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:47:49 crc kubenswrapper[4796]: E0127 06:47:49.564164 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 27 06:47:49 crc kubenswrapper[4796]: E0127 06:47:49.564197 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 27 06:47:49 crc kubenswrapper[4796]: E0127 06:47:49.564220 4796 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:47:49 crc kubenswrapper[4796]: E0127 06:47:49.564325 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-27 06:48:53.564279367 +0000 UTC m=+154.671246734 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.652842 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.652906 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.652929 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.652960 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.652983 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:49Z","lastTransitionTime":"2026-01-27T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.741810 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 01:17:20.247203871 +0000 UTC Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.747163 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.747163 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:49 crc kubenswrapper[4796]: E0127 06:47:49.747351 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:47:49 crc kubenswrapper[4796]: E0127 06:47:49.747467 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.755884 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.755943 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.755960 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.755980 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.755996 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:49Z","lastTransitionTime":"2026-01-27T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.859756 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.859816 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.859832 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.859856 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.859872 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:49Z","lastTransitionTime":"2026-01-27T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.962486 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.962519 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.962553 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.962572 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:49 crc kubenswrapper[4796]: I0127 06:47:49.962583 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:49Z","lastTransitionTime":"2026-01-27T06:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.065686 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.065790 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.065816 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.065847 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.065869 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:50Z","lastTransitionTime":"2026-01-27T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.169762 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.169815 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.169828 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.169849 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.169861 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:50Z","lastTransitionTime":"2026-01-27T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.273310 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.273371 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.273381 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.273402 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.273415 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:50Z","lastTransitionTime":"2026-01-27T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.376991 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.377069 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.377094 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.377161 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.377182 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:50Z","lastTransitionTime":"2026-01-27T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.480228 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.480292 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.480329 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.480362 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.480384 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:50Z","lastTransitionTime":"2026-01-27T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.583713 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.583783 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.583804 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.583832 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.583854 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:50Z","lastTransitionTime":"2026-01-27T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.686437 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.686509 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.686524 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.686573 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.686590 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:50Z","lastTransitionTime":"2026-01-27T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.742193 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 03:11:15.799535527 +0000 UTC Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.746718 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.746759 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:50 crc kubenswrapper[4796]: E0127 06:47:50.746956 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:47:50 crc kubenswrapper[4796]: E0127 06:47:50.747087 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.775514 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5411bac6-98fc-4818-b8d8-719ac8a4e77d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4703bc30d7f8c3e6340ce7f818985b99667f3703b1399e8542341618f423ce05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6a258372b9862ec4e513ffb93a28fb346a67b05941486a4be0275b94cea2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef72b83b73d0b802d6dd4dfd087f388d7a1e01621dd126b13729831545461d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://859ce6407959a0931a232e38824568585b947803cb4437a0d0e5101c8b19a36b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ce6407959a0931a232e38824568585b947803cb4437a0d0e5101c8b19a36b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:50Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.790367 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.790446 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.790469 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.790498 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.790518 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:50Z","lastTransitionTime":"2026-01-27T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.809409 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d74126aa-8355-44f9-a22e-180e90b39c56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6eebf6f0bf5d7eb7abb8cd624d134d69945b319d695580cf4f540ac6870a1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaa72f37c854df50cd7fa5fbe526f1fba2f70106de6ce6c178dd711b498b7466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaa72f37c854df50cd7fa5fbe526f1fba2f70106de6ce6c178dd711b498b7466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:50Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.831860 4796 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:50Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.848665 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:50Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.868311 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:50Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.878830 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zhtz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"61a29cf2-64f3-4655-a2fa-06b269c644ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://372dffb4f29750115dda48ed80597b347e7021ac6dcb2c4161ee2f290f27024f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdhzj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zhtz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-27T06:47:50Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.894343 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.894417 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.894436 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.894468 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.894492 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:50Z","lastTransitionTime":"2026-01-27T06:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.895049 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"84d7512b-555d-440a-b817-deb8ba12f61d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec0d72cc9d4158b10455650c8b8f916e9c314616c0cf92b4e1f2b966fa650e2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7djgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfqgm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:50Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.905895 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7c8820-8f4a-4cb8-a949-cbfa4e8efea7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://948d327df2d50b17a182fed3cb301b1396229844a41a5270ea33d7c5e9f95db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48948084ba0e80ec0b29617d1cec75a0d5e625e2343ee2d2cb3485bfbc526e3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a6b5f1a21e3d2b93baf990f32b73b024a048bffefe4304ff435a2bd428b099b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc39fbcef1d044b8d23138e48425ad33e0650e222fb2131f45b59ca7d954820\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:50Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.920034 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"523d7c54-e525-4fef-8de8-b3bff6b70d8e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3321234c0c20f923d830de9c484b2999c545617b190e9d71878ac27415be5e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50e924216c3acf596b7bf129c7dcc1a1210574ca6448168c3b51656f6b8878fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23cc4bd0e2ff039db2e85ee32fb5858d59e45bad332f863e96bbe6a2425866a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54666c18024ebd01ba90af789cb80d73749334fc99727c52b48091d730e5f035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://73914eaa79ae2af33436d28dd95c7d4b05925e055b2d771e793263aaac2ee58b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2af40e5afdf668e42c32bab00326a4eb73331659c3ca5dd0c205d69177dcc87f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4185fbbd732a4fe7101424095b04bfb6604bdbd54f8286e4c9e4db0b21b53fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6czc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9j4qm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:50Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.938079 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41c97fb9-7f88-4adf-b9e5-a35ca143adad\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2231382e49c9132f7858a6b3c2ca5d90f3667f27cbbd7f785f7dac10eba7a78f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"message\\\":\\\"le observer\\\\nW0127 06:46:45.293362 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0127 06:46:45.293594 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0127 06:46:45.294370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1781671479/tls.crt::/tmp/serving-cert-1781671479/tls.key\\\\\\\"\\\\nI0127 06:46:45.911051 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0127 06:46:45.919088 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0127 06:46:45.919115 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0127 06:46:45.919139 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0127 06:46:45.919144 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0127 06:46:45.926789 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0127 06:46:45.926815 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0127 06:46:45.926824 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0127 06:46:45.926827 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0127 06:46:45.926830 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0127 06:46:45.926832 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0127 06:46:45.926998 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0127 06:46:45.928911 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:50Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.954744 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5dbc5d77a0e5289b72b8085e996880defc445db9f8c42176a187beaab8f1cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:50Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.970519 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be4f327225657bb36e0f62b5b37a56fc800bff73f1362b4677e8cea9e4480a36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7e93662258b03f2e8e6cd111208474a5a4199833ad4a375646d465b2a87fcab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:50Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.986780 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-46ql2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3555bc2-e335-4479-8b6f-8b5970b27a25\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7a41342bb50022b27d6a19ad1eef8be259001106eed45870bab131cd92fc35c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:36Z\\\",\\\"message\\\":\\\"2026-01-27T06:46:51+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_8b8d5c2a-d08d-4f37-9c93-e0c8ee74c708\\\\n2026-01-27T06:46:51+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_8b8d5c2a-d08d-4f37-9c93-e0c8ee74c708 to /host/opt/cni/bin/\\\\n2026-01-27T06:46:51Z [verbose] multus-daemon started\\\\n2026-01-27T06:46:51Z [verbose] Readiness Indicator file check\\\\n2026-01-27T06:47:36Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h7ckp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-46ql2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:50Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.997761 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.997834 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.997857 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.997892 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:50 crc kubenswrapper[4796]: I0127 06:47:50.997918 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:50Z","lastTransitionTime":"2026-01-27T06:47:50Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.008820 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1fb58d6-d9a4-4095-be46-a544216963f7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa4ef31b3123acd6bde6c015b604904e010d9996f7ba318fa1a7ae674ea7cf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88151a9fd7ac9365f1563c367588d8093a16206cfebfc1703f8d9cfceaee5134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://37bbcf36779d5d8ff4b06b0dba628a581f534c90af5774321a51bb6360c88db6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e09fbd802f33aa6f2ebf9135fea9a98a1e8c0bb14440043fbada1b528b505ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4e75d49621819f00db37b6f4471e5188a3eb781549bbae1422b6c2216a3366c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bafac0d6ec40525fc7e61773ebfc410b7f7f42535491b72e0301ce93bd1a6a46\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ee637b8b6de85e4202e420fb9eb9c78f0c496062f69cb82ed564bf9f524817\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24ee637b8b6de85e4202e420fb9eb9c78f0c496062f69cb82ed564bf9f524817\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-27T06:47:44Z\\\",\\\"message\\\":\\\"rt:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.4.176\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:1936, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0127 06:47:44.311929 6908 services_controller.go:444] Built service openshift-ingress/router-internal-default LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0127 06:47:44.311746 6908 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0127 06:47:44.311565 6908 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-27T06:47:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xqmc4_openshift-ovn-kubernetes(a1fb58d6-d9a4-4095-be46-a544216963f7)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7353129cb709af2186d760a00dbef3ae91dc7feeb6dab2149754accb95fae36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fskkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xqmc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:51Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.022327 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kx5rc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6932c20-41d2-487b-90b4-1e3c96cb17fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52d0c40d9e59c7e62b7479b955967849178079a9bb2ac73ac987bd4eddc11183\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cnlxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:49Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kx5rc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:51Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.036425 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gvx56" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjbl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mjbl8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:02Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gvx56\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:51Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.064139 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91a38b2a-1669-414b-b2de-bd9f7d656db4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e496a7cea3a6ba7b8e4e60bd02f9cafc56cc4cf1049524b15749352b71b1551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c40ca26c279d35c5369a97a8926c8c76b5773da64438b08a0fd721604f5cf1a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51f23c6ba8596716e9d0b3001ed8d1494f186950ddb86e3187e98b5a9c5940c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab5e7ccdedeb2a9cc67c1eb6014f0a327f009ba
ace0decc678fd319c7f02e63d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8be41d70648c6527439ed95843ec087f65506642438fab67094697952d0c69ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://275db64da105c4c6bedb2bc3e5b71bb0ebed367d309be1816657f4945958e59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1123db3b45ce6bacf153b4cd87e9825063e957dd634123abf73e10a2dff20ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16fb167cb247f5f5251b41a37744a1b3a114bd42db68b74b06f5edfccc2aca8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:51Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.089184 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:48Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://75c8f9bf62140eb4e54fc54bfef65710f511027fb29cc70e83d1f2516bb5b222\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:51Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.100014 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.100040 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.100052 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.100070 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.100081 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:51Z","lastTransitionTime":"2026-01-27T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.104860 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f76abf7f-03d4-496f-b7bd-1bc63e0425e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c19e9f9a416c0e4e8723cfe5ccf410490e6599f0def64f7e8e1beab5d3ae8896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c76f9100193fe1dc49d79019ff1953e3504677a9ca67ea5d80b6ec1d79337bdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j57jq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:47:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-gddlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:51Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.203283 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.203313 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.203323 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.203339 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.203349 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:51Z","lastTransitionTime":"2026-01-27T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.306006 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.306132 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.306157 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.306203 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.306241 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:51Z","lastTransitionTime":"2026-01-27T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.408828 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.408881 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.408907 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.408930 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.408945 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:51Z","lastTransitionTime":"2026-01-27T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.512380 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.512422 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.512436 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.512464 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.512488 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:51Z","lastTransitionTime":"2026-01-27T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.615918 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.615986 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.615999 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.616019 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.616030 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:51Z","lastTransitionTime":"2026-01-27T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.719394 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.719452 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.719469 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.719493 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.719512 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:51Z","lastTransitionTime":"2026-01-27T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.742707 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 16:48:16.378339337 +0000 UTC Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.747164 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.747254 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:51 crc kubenswrapper[4796]: E0127 06:47:51.747454 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:47:51 crc kubenswrapper[4796]: E0127 06:47:51.747625 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.822214 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.822275 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.822288 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.822311 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.822327 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:51Z","lastTransitionTime":"2026-01-27T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.926000 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.926046 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.926058 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.926078 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:51 crc kubenswrapper[4796]: I0127 06:47:51.926090 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:51Z","lastTransitionTime":"2026-01-27T06:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.028928 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.028992 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.029005 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.029026 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.029042 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:52Z","lastTransitionTime":"2026-01-27T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.132510 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.132621 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.132644 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.132675 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.132695 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:52Z","lastTransitionTime":"2026-01-27T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.236268 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.236343 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.236375 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.236409 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.236434 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:52Z","lastTransitionTime":"2026-01-27T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.339703 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.339783 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.339807 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.339843 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.339869 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:52Z","lastTransitionTime":"2026-01-27T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.444071 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.444151 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.444178 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.444210 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.444231 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:52Z","lastTransitionTime":"2026-01-27T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.548520 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.548757 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.548776 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.548856 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.548929 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:52Z","lastTransitionTime":"2026-01-27T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.652894 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.652988 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.653003 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.653029 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.653046 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:52Z","lastTransitionTime":"2026-01-27T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.743115 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 10:39:59.24381455 +0000 UTC Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.746391 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:52 crc kubenswrapper[4796]: E0127 06:47:52.746578 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.746625 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:52 crc kubenswrapper[4796]: E0127 06:47:52.746792 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.755505 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.755558 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.755569 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.755585 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.755597 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:52Z","lastTransitionTime":"2026-01-27T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.859920 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.859963 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.859974 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.859991 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.860002 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:52Z","lastTransitionTime":"2026-01-27T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.962740 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.962818 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.962844 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.962873 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:52 crc kubenswrapper[4796]: I0127 06:47:52.962900 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:52Z","lastTransitionTime":"2026-01-27T06:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.067249 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.067324 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.067347 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.067389 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.067413 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:53Z","lastTransitionTime":"2026-01-27T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.171752 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.171835 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.171851 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.171879 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.171896 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:53Z","lastTransitionTime":"2026-01-27T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.277746 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.277828 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.277852 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.277884 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.277912 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:53Z","lastTransitionTime":"2026-01-27T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.381852 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.381935 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.381950 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.381972 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.381987 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:53Z","lastTransitionTime":"2026-01-27T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.484882 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.484954 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.484972 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.484998 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.485016 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:53Z","lastTransitionTime":"2026-01-27T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.587799 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.587894 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.587922 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.587952 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.587977 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:53Z","lastTransitionTime":"2026-01-27T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.691150 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.691234 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.691248 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.691273 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.691286 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:53Z","lastTransitionTime":"2026-01-27T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.743310 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 05:03:15.218917612 +0000 UTC Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.746716 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.746745 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:53 crc kubenswrapper[4796]: E0127 06:47:53.746923 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:47:53 crc kubenswrapper[4796]: E0127 06:47:53.747076 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.794607 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.794680 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.794694 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.794719 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.794734 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:53Z","lastTransitionTime":"2026-01-27T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.897904 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.897980 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.897998 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.898025 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.898047 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:53Z","lastTransitionTime":"2026-01-27T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.946501 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.946586 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.946596 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.946612 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.946622 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:53Z","lastTransitionTime":"2026-01-27T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:53 crc kubenswrapper[4796]: E0127 06:47:53.966206 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab5d23f4-0a1a-4348-a4ed-cd82856490af\\\",\\\"systemUUID\\\":\\\"ea2a725c-47df-4291-8c97-fc5620e930c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:53Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.969840 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.969903 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.969922 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.969947 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.969965 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:53Z","lastTransitionTime":"2026-01-27T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:53 crc kubenswrapper[4796]: E0127 06:47:53.990643 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab5d23f4-0a1a-4348-a4ed-cd82856490af\\\",\\\"systemUUID\\\":\\\"ea2a725c-47df-4291-8c97-fc5620e930c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:53Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.994485 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.994523 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.994550 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.994567 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:53 crc kubenswrapper[4796]: I0127 06:47:53.994580 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:53Z","lastTransitionTime":"2026-01-27T06:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:54 crc kubenswrapper[4796]: E0127 06:47:54.010876 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab5d23f4-0a1a-4348-a4ed-cd82856490af\\\",\\\"systemUUID\\\":\\\"ea2a725c-47df-4291-8c97-fc5620e930c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.015051 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.015125 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.015145 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.015171 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.015191 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:54Z","lastTransitionTime":"2026-01-27T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:54 crc kubenswrapper[4796]: E0127 06:47:54.033432 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab5d23f4-0a1a-4348-a4ed-cd82856490af\\\",\\\"systemUUID\\\":\\\"ea2a725c-47df-4291-8c97-fc5620e930c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.037379 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.037435 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.037454 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.037480 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.037497 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:54Z","lastTransitionTime":"2026-01-27T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:54 crc kubenswrapper[4796]: E0127 06:47:54.054901 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-27T06:47:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab5d23f4-0a1a-4348-a4ed-cd82856490af\\\",\\\"systemUUID\\\":\\\"ea2a725c-47df-4291-8c97-fc5620e930c7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:47:54Z is after 2025-08-24T17:21:41Z" Jan 27 06:47:54 crc kubenswrapper[4796]: E0127 06:47:54.055156 4796 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.057031 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.057068 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.057082 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.057100 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.057113 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:54Z","lastTransitionTime":"2026-01-27T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.160716 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.160779 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.160796 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.160820 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.160838 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:54Z","lastTransitionTime":"2026-01-27T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.264328 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.264381 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.264392 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.264409 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.264421 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:54Z","lastTransitionTime":"2026-01-27T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.368109 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.368160 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.368233 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.368254 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.368266 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:54Z","lastTransitionTime":"2026-01-27T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.471400 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.471499 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.471518 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.471573 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.471593 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:54Z","lastTransitionTime":"2026-01-27T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.573884 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.574041 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.574062 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.574088 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.574108 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:54Z","lastTransitionTime":"2026-01-27T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.676897 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.676956 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.676973 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.676997 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.677017 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:54Z","lastTransitionTime":"2026-01-27T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.743946 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 18:35:42.011409132 +0000 UTC Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.746428 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.746478 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:54 crc kubenswrapper[4796]: E0127 06:47:54.746724 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:47:54 crc kubenswrapper[4796]: E0127 06:47:54.746937 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.780276 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.780356 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.780374 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.780405 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.780427 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:54Z","lastTransitionTime":"2026-01-27T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.883433 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.883487 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.883503 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.883578 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.883606 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:54Z","lastTransitionTime":"2026-01-27T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.986169 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.986258 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.986276 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.986303 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:54 crc kubenswrapper[4796]: I0127 06:47:54.986320 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:54Z","lastTransitionTime":"2026-01-27T06:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.089898 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.089957 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.089980 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.090011 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.090035 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:55Z","lastTransitionTime":"2026-01-27T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.194497 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.195009 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.195032 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.195059 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.195078 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:55Z","lastTransitionTime":"2026-01-27T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.298342 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.298387 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.298403 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.298425 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.298441 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:55Z","lastTransitionTime":"2026-01-27T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.401834 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.401910 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.401929 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.401956 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.401973 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:55Z","lastTransitionTime":"2026-01-27T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.504263 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.504335 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.504361 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.504390 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.504412 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:55Z","lastTransitionTime":"2026-01-27T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.607833 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.607894 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.607911 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.607934 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.607946 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:55Z","lastTransitionTime":"2026-01-27T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.711032 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.711075 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.711085 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.711100 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.711112 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:55Z","lastTransitionTime":"2026-01-27T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.744911 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 07:01:40.867774013 +0000 UTC Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.746177 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.746351 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:55 crc kubenswrapper[4796]: E0127 06:47:55.746710 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:47:55 crc kubenswrapper[4796]: E0127 06:47:55.746965 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.814745 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.814828 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.814849 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.814909 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.814930 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:55Z","lastTransitionTime":"2026-01-27T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.918709 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.918794 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.918820 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.918851 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:55 crc kubenswrapper[4796]: I0127 06:47:55.918874 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:55Z","lastTransitionTime":"2026-01-27T06:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.021610 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.021657 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.021670 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.021690 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.021705 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:56Z","lastTransitionTime":"2026-01-27T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.125263 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.125301 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.125310 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.125324 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.125335 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:56Z","lastTransitionTime":"2026-01-27T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.227983 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.228016 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.228041 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.228054 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.228063 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:56Z","lastTransitionTime":"2026-01-27T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.330125 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.330161 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.330173 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.330188 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.330197 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:56Z","lastTransitionTime":"2026-01-27T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.432976 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.433036 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.433050 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.433063 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.433072 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:56Z","lastTransitionTime":"2026-01-27T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.535917 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.535972 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.535989 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.536017 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.536034 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:56Z","lastTransitionTime":"2026-01-27T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.639011 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.639057 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.639073 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.639097 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.639114 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:56Z","lastTransitionTime":"2026-01-27T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.742154 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.742199 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.742214 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.742235 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.742252 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:56Z","lastTransitionTime":"2026-01-27T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.759109 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 23:05:17.822676591 +0000 UTC Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.759425 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.759497 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:56 crc kubenswrapper[4796]: E0127 06:47:56.759666 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:47:56 crc kubenswrapper[4796]: E0127 06:47:56.759810 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.845463 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.845592 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.845620 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.845651 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.845676 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:56Z","lastTransitionTime":"2026-01-27T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.949019 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.949071 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.949089 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.949110 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:56 crc kubenswrapper[4796]: I0127 06:47:56.949124 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:56Z","lastTransitionTime":"2026-01-27T06:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.052376 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.052438 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.052453 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.052473 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.052487 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:57Z","lastTransitionTime":"2026-01-27T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.156297 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.156359 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.156371 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.156391 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.156403 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:57Z","lastTransitionTime":"2026-01-27T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.258716 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.258780 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.258797 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.258822 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.258839 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:57Z","lastTransitionTime":"2026-01-27T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.361564 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.361609 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.361619 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.361635 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.361648 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:57Z","lastTransitionTime":"2026-01-27T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.464572 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.464799 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.464819 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.464842 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.464859 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:57Z","lastTransitionTime":"2026-01-27T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.567826 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.567898 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.567913 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.567939 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.567956 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:57Z","lastTransitionTime":"2026-01-27T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.670953 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.671376 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.671514 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.671753 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.671928 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:57Z","lastTransitionTime":"2026-01-27T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.746688 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.746716 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:57 crc kubenswrapper[4796]: E0127 06:47:57.746993 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:47:57 crc kubenswrapper[4796]: E0127 06:47:57.747113 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.748193 4796 scope.go:117] "RemoveContainer" containerID="24ee637b8b6de85e4202e420fb9eb9c78f0c496062f69cb82ed564bf9f524817" Jan 27 06:47:57 crc kubenswrapper[4796]: E0127 06:47:57.748492 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xqmc4_openshift-ovn-kubernetes(a1fb58d6-d9a4-4095-be46-a544216963f7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.759915 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 11:25:12.320048785 +0000 UTC Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.774853 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.774941 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.774971 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.775003 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.775027 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:57Z","lastTransitionTime":"2026-01-27T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.878460 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.878586 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.878617 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.878651 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.878674 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:57Z","lastTransitionTime":"2026-01-27T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.981188 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.981222 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.981233 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.981249 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:57 crc kubenswrapper[4796]: I0127 06:47:57.981261 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:57Z","lastTransitionTime":"2026-01-27T06:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.083997 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.084053 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.084070 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.084095 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.084115 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:58Z","lastTransitionTime":"2026-01-27T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.187666 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.187728 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.187737 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.187757 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.187768 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:58Z","lastTransitionTime":"2026-01-27T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.291910 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.291967 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.291976 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.291995 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.292007 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:58Z","lastTransitionTime":"2026-01-27T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.395164 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.395227 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.395238 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.395258 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.395270 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:58Z","lastTransitionTime":"2026-01-27T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.497512 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.497578 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.497589 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.497605 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.497620 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:58Z","lastTransitionTime":"2026-01-27T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.601025 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.601092 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.601104 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.601124 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.601141 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:58Z","lastTransitionTime":"2026-01-27T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.704144 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.704225 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.704246 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.704273 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.704293 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:58Z","lastTransitionTime":"2026-01-27T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.746878 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.747007 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:47:58 crc kubenswrapper[4796]: E0127 06:47:58.747067 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:47:58 crc kubenswrapper[4796]: E0127 06:47:58.747267 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.761776 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 02:03:26.35599389 +0000 UTC Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.807167 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.807229 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.807252 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.807280 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.807303 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:58Z","lastTransitionTime":"2026-01-27T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.910776 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.910840 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.910857 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.910882 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:58 crc kubenswrapper[4796]: I0127 06:47:58.910900 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:58Z","lastTransitionTime":"2026-01-27T06:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.014451 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.014499 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.014512 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.014530 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.014563 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:59Z","lastTransitionTime":"2026-01-27T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.117893 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.117952 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.117969 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.117994 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.118010 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:59Z","lastTransitionTime":"2026-01-27T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.221768 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.221857 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.221881 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.221915 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.221939 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:59Z","lastTransitionTime":"2026-01-27T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.325818 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.325916 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.325937 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.325970 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.325992 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:59Z","lastTransitionTime":"2026-01-27T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.429345 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.429402 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.429421 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.429444 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.429461 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:59Z","lastTransitionTime":"2026-01-27T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.533050 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.533132 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.533147 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.533169 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.533183 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:59Z","lastTransitionTime":"2026-01-27T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.637341 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.637405 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.637426 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.637449 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.637469 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:59Z","lastTransitionTime":"2026-01-27T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.741022 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.741108 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.741127 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.741152 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.741178 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:59Z","lastTransitionTime":"2026-01-27T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.746295 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.746377 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:47:59 crc kubenswrapper[4796]: E0127 06:47:59.746504 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:47:59 crc kubenswrapper[4796]: E0127 06:47:59.746645 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.762747 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 19:00:52.91025011 +0000 UTC Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.845817 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.845906 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.845927 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.845952 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.846032 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:59Z","lastTransitionTime":"2026-01-27T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.949527 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.949598 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.949610 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.949626 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:47:59 crc kubenswrapper[4796]: I0127 06:47:59.949662 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:47:59Z","lastTransitionTime":"2026-01-27T06:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.052734 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.052815 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.052840 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.052873 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.052892 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:00Z","lastTransitionTime":"2026-01-27T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.156585 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.156731 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.156757 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.156789 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.156807 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:00Z","lastTransitionTime":"2026-01-27T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.267733 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.268313 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.268330 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.268361 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.268377 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:00Z","lastTransitionTime":"2026-01-27T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.371358 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.371439 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.371484 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.371511 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.371569 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:00Z","lastTransitionTime":"2026-01-27T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.474978 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.475065 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.475084 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.475112 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.475132 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:00Z","lastTransitionTime":"2026-01-27T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.578354 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.578423 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.578439 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.578466 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.578486 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:00Z","lastTransitionTime":"2026-01-27T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.681577 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.681625 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.681634 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.681651 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.681661 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:00Z","lastTransitionTime":"2026-01-27T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.746627 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.746675 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:00 crc kubenswrapper[4796]: E0127 06:48:00.747477 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:00 crc kubenswrapper[4796]: E0127 06:48:00.747630 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.763366 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 02:23:35.748711146 +0000 UTC Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.766752 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5411bac6-98fc-4818-b8d8-719ac8a4e77d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:47:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4703bc30d7f8c3e6340ce7f818985b99667f3703b1399e8542341618f423ce05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6a258372b9862ec4e513ffb93a28fb346a67b05941486a4be0275b94cea2a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef72b83b73d0b802d6dd4dfd087f388d7a1e01621dd126b13729831545461d5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://859ce6407959a0931a232e38824568585b947803cb4437a0d0e5101c8b19a36b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://859ce6407959a0931a232e38824568585b947803cb4437a0d0e5101c8b19a36b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.783202 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d74126aa-8355-44f9-a22e-180e90b39c56\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6eebf6f0bf5d7eb7abb8cd624d134d69945b319d695580cf4f540ac6870a1d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-27T06:46:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaa72f37c854df50cd7fa5fbe526f1fba2f70106de6ce6c178dd711b498b7466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaa72f37c854df50cd7fa5fbe526f1fba2f70106de6ce6c178dd711b498b7466\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-27T06:46:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-27T06:46:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-27T06:46:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.784852 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.784961 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.785164 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.785324 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.785589 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:00Z","lastTransitionTime":"2026-01-27T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.802958 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-27T06:46:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-27T06:48:00Z is after 2025-08-24T17:21:41Z" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.878571 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" podStartSLOduration=73.878519708 podStartE2EDuration="1m13.878519708s" podCreationTimestamp="2026-01-27 06:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:00.878296953 +0000 UTC m=+101.985264290" watchObservedRunningTime="2026-01-27 06:48:00.878519708 +0000 UTC m=+101.985487045" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.878887 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-zhtz2" podStartSLOduration=74.878878027 podStartE2EDuration="1m14.878878027s" podCreationTimestamp="2026-01-27 06:46:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:00.863351602 +0000 UTC m=+101.970318949" watchObservedRunningTime="2026-01-27 06:48:00.878878027 +0000 UTC m=+101.985845364" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.888394 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.888430 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.888440 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.888453 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.888463 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:00Z","lastTransitionTime":"2026-01-27T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.919515 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=69.919492882 podStartE2EDuration="1m9.919492882s" podCreationTimestamp="2026-01-27 06:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:00.895659795 +0000 UTC m=+102.002627142" watchObservedRunningTime="2026-01-27 06:48:00.919492882 +0000 UTC m=+102.026460209" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.919924 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-9j4qm" podStartSLOduration=73.919916964 podStartE2EDuration="1m13.919916964s" podCreationTimestamp="2026-01-27 06:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:00.919360599 +0000 UTC m=+102.026327956" watchObservedRunningTime="2026-01-27 06:48:00.919916964 +0000 UTC m=+102.026884291" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.962390 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=74.962369186 podStartE2EDuration="1m14.962369186s" podCreationTimestamp="2026-01-27 06:46:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:00.939926714 +0000 UTC m=+102.046894031" watchObservedRunningTime="2026-01-27 06:48:00.962369186 +0000 UTC m=+102.069336523" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.991373 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.991403 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.991413 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.991427 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:00 crc kubenswrapper[4796]: I0127 06:48:00.991437 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:00Z","lastTransitionTime":"2026-01-27T06:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.000581 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-46ql2" podStartSLOduration=74.000563309 podStartE2EDuration="1m14.000563309s" podCreationTimestamp="2026-01-27 06:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:01.000123508 +0000 UTC m=+102.107090865" watchObservedRunningTime="2026-01-27 06:48:01.000563309 +0000 UTC m=+102.107530646" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.044963 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-kx5rc" podStartSLOduration=75.04494293 podStartE2EDuration="1m15.04494293s" podCreationTimestamp="2026-01-27 06:46:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:01.044508699 +0000 UTC m=+102.151476036" watchObservedRunningTime="2026-01-27 06:48:01.04494293 +0000 UTC m=+102.151910257" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.093029 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.093098 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.093108 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.093124 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.093136 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:01Z","lastTransitionTime":"2026-01-27T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.100112 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=74.100094866 podStartE2EDuration="1m14.100094866s" podCreationTimestamp="2026-01-27 06:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:01.099689465 +0000 UTC m=+102.206656802" watchObservedRunningTime="2026-01-27 06:48:01.100094866 +0000 UTC m=+102.207062203" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.137439 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-gddlf" podStartSLOduration=74.137406027 podStartE2EDuration="1m14.137406027s" podCreationTimestamp="2026-01-27 06:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:01.136562665 +0000 UTC m=+102.243530012" watchObservedRunningTime="2026-01-27 06:48:01.137406027 +0000 UTC m=+102.244373364" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.196239 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.196288 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.196301 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.196319 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.196330 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:01Z","lastTransitionTime":"2026-01-27T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.298842 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.298906 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.298938 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.298971 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.298994 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:01Z","lastTransitionTime":"2026-01-27T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.402264 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.402329 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.402353 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.402386 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.402409 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:01Z","lastTransitionTime":"2026-01-27T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.506952 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.507018 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.507029 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.507050 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.507064 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:01Z","lastTransitionTime":"2026-01-27T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.610460 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.610524 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.610555 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.610574 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.610587 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:01Z","lastTransitionTime":"2026-01-27T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.714209 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.714259 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.714290 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.714312 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.714325 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:01Z","lastTransitionTime":"2026-01-27T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.746771 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.746809 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:48:01 crc kubenswrapper[4796]: E0127 06:48:01.746956 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:01 crc kubenswrapper[4796]: E0127 06:48:01.747374 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.764489 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 04:57:50.211619724 +0000 UTC Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.817230 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.817266 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.817273 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.817286 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.817295 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:01Z","lastTransitionTime":"2026-01-27T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.920433 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.920479 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.920488 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.920505 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:01 crc kubenswrapper[4796]: I0127 06:48:01.920562 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:01Z","lastTransitionTime":"2026-01-27T06:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.024734 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.024808 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.024843 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.024874 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.024896 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:02Z","lastTransitionTime":"2026-01-27T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.128163 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.128244 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.128256 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.128281 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.128293 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:02Z","lastTransitionTime":"2026-01-27T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.231102 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.231181 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.231207 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.231237 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.231259 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:02Z","lastTransitionTime":"2026-01-27T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.335143 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.335219 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.335244 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.335276 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.335300 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:02Z","lastTransitionTime":"2026-01-27T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.438793 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.438880 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.438904 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.438941 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.438969 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:02Z","lastTransitionTime":"2026-01-27T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.543290 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.543364 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.543377 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.543405 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.543418 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:02Z","lastTransitionTime":"2026-01-27T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.646874 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.646963 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.646978 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.646998 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.647012 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:02Z","lastTransitionTime":"2026-01-27T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.747083 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.747160 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:02 crc kubenswrapper[4796]: E0127 06:48:02.748314 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:02 crc kubenswrapper[4796]: E0127 06:48:02.748679 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.749651 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.749720 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.749744 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.749770 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.749792 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:02Z","lastTransitionTime":"2026-01-27T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.765298 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 01:21:49.616511796 +0000 UTC Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.853387 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.853453 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.853469 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.853494 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.853514 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:02Z","lastTransitionTime":"2026-01-27T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.957658 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.957733 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.957758 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.957793 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:02 crc kubenswrapper[4796]: I0127 06:48:02.957823 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:02Z","lastTransitionTime":"2026-01-27T06:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.061156 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.061231 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.061254 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.061293 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.061319 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:03Z","lastTransitionTime":"2026-01-27T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.169078 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.169158 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.169191 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.169234 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.169255 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:03Z","lastTransitionTime":"2026-01-27T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.273241 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.273298 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.273316 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.273344 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.273367 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:03Z","lastTransitionTime":"2026-01-27T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.377468 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.377604 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.377627 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.377657 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.377682 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:03Z","lastTransitionTime":"2026-01-27T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.481973 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.482032 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.482051 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.482075 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.482092 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:03Z","lastTransitionTime":"2026-01-27T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.585577 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.585635 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.585653 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.585676 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.585694 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:03Z","lastTransitionTime":"2026-01-27T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.689520 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.689603 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.689614 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.689632 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.689644 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:03Z","lastTransitionTime":"2026-01-27T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.746382 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.746400 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:03 crc kubenswrapper[4796]: E0127 06:48:03.746643 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:48:03 crc kubenswrapper[4796]: E0127 06:48:03.746774 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.765510 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 21:12:17.1084402 +0000 UTC Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.793379 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.793472 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.793517 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.793588 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.793612 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:03Z","lastTransitionTime":"2026-01-27T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.906914 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.906990 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.907006 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.907035 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:03 crc kubenswrapper[4796]: I0127 06:48:03.907052 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:03Z","lastTransitionTime":"2026-01-27T06:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.010296 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.010330 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.010338 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.010350 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.010359 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:04Z","lastTransitionTime":"2026-01-27T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.113733 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.113790 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.113808 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.113836 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.113856 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:04Z","lastTransitionTime":"2026-01-27T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.217850 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.217941 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.217966 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.218000 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.218020 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:04Z","lastTransitionTime":"2026-01-27T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.274858 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.274934 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.274955 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.274984 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.275006 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-27T06:48:04Z","lastTransitionTime":"2026-01-27T06:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.352035 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwclp"] Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.352678 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwclp" Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.355849 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.356818 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.357678 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.357740 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.406984 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=49.40695111 podStartE2EDuration="49.40695111s" podCreationTimestamp="2026-01-27 06:47:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:04.401454269 +0000 UTC m=+105.508421626" watchObservedRunningTime="2026-01-27 06:48:04.40695111 +0000 UTC m=+105.513918477" Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.418100 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=25.418074403 podStartE2EDuration="25.418074403s" podCreationTimestamp="2026-01-27 06:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:04.415256391 +0000 UTC m=+105.522223758" watchObservedRunningTime="2026-01-27 06:48:04.418074403 +0000 UTC m=+105.525041740" Jan 27 
06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.450955 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e0de7ef-7d5e-448a-b15a-ad6903bfd32f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gwclp\" (UID: \"1e0de7ef-7d5e-448a-b15a-ad6903bfd32f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwclp" Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.451051 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1e0de7ef-7d5e-448a-b15a-ad6903bfd32f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gwclp\" (UID: \"1e0de7ef-7d5e-448a-b15a-ad6903bfd32f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwclp" Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.451084 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e0de7ef-7d5e-448a-b15a-ad6903bfd32f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gwclp\" (UID: \"1e0de7ef-7d5e-448a-b15a-ad6903bfd32f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwclp" Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.451112 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1e0de7ef-7d5e-448a-b15a-ad6903bfd32f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gwclp\" (UID: \"1e0de7ef-7d5e-448a-b15a-ad6903bfd32f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwclp" Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.451138 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1e0de7ef-7d5e-448a-b15a-ad6903bfd32f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gwclp\" (UID: \"1e0de7ef-7d5e-448a-b15a-ad6903bfd32f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwclp" Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.552783 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e0de7ef-7d5e-448a-b15a-ad6903bfd32f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gwclp\" (UID: \"1e0de7ef-7d5e-448a-b15a-ad6903bfd32f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwclp" Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.552857 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1e0de7ef-7d5e-448a-b15a-ad6903bfd32f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gwclp\" (UID: \"1e0de7ef-7d5e-448a-b15a-ad6903bfd32f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwclp" Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.552882 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e0de7ef-7d5e-448a-b15a-ad6903bfd32f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gwclp\" (UID: \"1e0de7ef-7d5e-448a-b15a-ad6903bfd32f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwclp" Jan 27 
06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.552899 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1e0de7ef-7d5e-448a-b15a-ad6903bfd32f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gwclp\" (UID: \"1e0de7ef-7d5e-448a-b15a-ad6903bfd32f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwclp" Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.552917 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1e0de7ef-7d5e-448a-b15a-ad6903bfd32f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gwclp\" (UID: \"1e0de7ef-7d5e-448a-b15a-ad6903bfd32f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwclp" Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.553082 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/1e0de7ef-7d5e-448a-b15a-ad6903bfd32f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-gwclp\" (UID: \"1e0de7ef-7d5e-448a-b15a-ad6903bfd32f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwclp" Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.553103 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/1e0de7ef-7d5e-448a-b15a-ad6903bfd32f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-gwclp\" (UID: \"1e0de7ef-7d5e-448a-b15a-ad6903bfd32f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwclp" Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.554154 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1e0de7ef-7d5e-448a-b15a-ad6903bfd32f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-gwclp\" (UID: \"1e0de7ef-7d5e-448a-b15a-ad6903bfd32f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwclp" Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.561762 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e0de7ef-7d5e-448a-b15a-ad6903bfd32f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-gwclp\" (UID: \"1e0de7ef-7d5e-448a-b15a-ad6903bfd32f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwclp" Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.572334 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e0de7ef-7d5e-448a-b15a-ad6903bfd32f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-gwclp\" (UID: \"1e0de7ef-7d5e-448a-b15a-ad6903bfd32f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwclp" Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.683862 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwclp" Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.747216 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.747290 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:04 crc kubenswrapper[4796]: E0127 06:48:04.747362 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:04 crc kubenswrapper[4796]: E0127 06:48:04.747505 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.766290 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 21:38:33.517644887 +0000 UTC Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.766338 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 27 06:48:04 crc kubenswrapper[4796]: I0127 06:48:04.777011 4796 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 27 06:48:05 crc kubenswrapper[4796]: I0127 06:48:05.365510 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwclp" event={"ID":"1e0de7ef-7d5e-448a-b15a-ad6903bfd32f","Type":"ContainerStarted","Data":"efcbc63dc496e3a79da70393456dff098ff6810aa7537782aed8900ec1eb68f9"} Jan 27 06:48:05 crc kubenswrapper[4796]: I0127 06:48:05.365615 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwclp" event={"ID":"1e0de7ef-7d5e-448a-b15a-ad6903bfd32f","Type":"ContainerStarted","Data":"f80e36d83e6b0720e81f57e26ac94f6bfaf2179c9d430deea31821d2bb3dc6e5"} Jan 27 06:48:05 crc kubenswrapper[4796]: I0127 06:48:05.397478 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-gwclp" podStartSLOduration=78.397447405 podStartE2EDuration="1m18.397447405s" podCreationTimestamp="2026-01-27 06:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:05.395955206 +0000 UTC m=+106.502922573" watchObservedRunningTime="2026-01-27 06:48:05.397447405 +0000 UTC m=+106.504414772" Jan 27 06:48:05 crc kubenswrapper[4796]: I0127 06:48:05.746646 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:48:05 crc kubenswrapper[4796]: E0127 06:48:05.746890 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:48:05 crc kubenswrapper[4796]: I0127 06:48:05.746646 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:05 crc kubenswrapper[4796]: E0127 06:48:05.747388 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:06 crc kubenswrapper[4796]: I0127 06:48:06.372707 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09-metrics-certs\") pod \"network-metrics-daemon-gvx56\" (UID: \"bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09\") " pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:48:06 crc kubenswrapper[4796]: E0127 06:48:06.372909 4796 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:48:06 crc kubenswrapper[4796]: E0127 06:48:06.373002 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09-metrics-certs podName:bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09 nodeName:}" failed. No retries permitted until 2026-01-27 06:49:10.372976188 +0000 UTC m=+171.479943545 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09-metrics-certs") pod "network-metrics-daemon-gvx56" (UID: "bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 27 06:48:06 crc kubenswrapper[4796]: I0127 06:48:06.746611 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:06 crc kubenswrapper[4796]: I0127 06:48:06.746990 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:06 crc kubenswrapper[4796]: E0127 06:48:06.747188 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:06 crc kubenswrapper[4796]: E0127 06:48:06.747329 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:07 crc kubenswrapper[4796]: I0127 06:48:07.746694 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:48:07 crc kubenswrapper[4796]: I0127 06:48:07.746706 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:07 crc kubenswrapper[4796]: E0127 06:48:07.746894 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:48:07 crc kubenswrapper[4796]: E0127 06:48:07.747028 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:08 crc kubenswrapper[4796]: I0127 06:48:08.746213 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:08 crc kubenswrapper[4796]: I0127 06:48:08.746733 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:08 crc kubenswrapper[4796]: E0127 06:48:08.747017 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:08 crc kubenswrapper[4796]: I0127 06:48:08.747317 4796 scope.go:117] "RemoveContainer" containerID="24ee637b8b6de85e4202e420fb9eb9c78f0c496062f69cb82ed564bf9f524817" Jan 27 06:48:08 crc kubenswrapper[4796]: E0127 06:48:08.747581 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:08 crc kubenswrapper[4796]: E0127 06:48:08.747626 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xqmc4_openshift-ovn-kubernetes(a1fb58d6-d9a4-4095-be46-a544216963f7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" Jan 27 06:48:09 crc kubenswrapper[4796]: I0127 06:48:09.746708 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:09 crc kubenswrapper[4796]: I0127 06:48:09.746748 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:48:09 crc kubenswrapper[4796]: E0127 06:48:09.746911 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:09 crc kubenswrapper[4796]: E0127 06:48:09.747087 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:48:10 crc kubenswrapper[4796]: I0127 06:48:10.746168 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:10 crc kubenswrapper[4796]: I0127 06:48:10.746166 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:10 crc kubenswrapper[4796]: E0127 06:48:10.748279 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:10 crc kubenswrapper[4796]: E0127 06:48:10.748599 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:11 crc kubenswrapper[4796]: I0127 06:48:11.746558 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:48:11 crc kubenswrapper[4796]: I0127 06:48:11.746762 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:11 crc kubenswrapper[4796]: E0127 06:48:11.746859 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:48:11 crc kubenswrapper[4796]: E0127 06:48:11.747087 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:12 crc kubenswrapper[4796]: I0127 06:48:12.746379 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:12 crc kubenswrapper[4796]: I0127 06:48:12.746379 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:12 crc kubenswrapper[4796]: E0127 06:48:12.746582 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:12 crc kubenswrapper[4796]: E0127 06:48:12.746665 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:13 crc kubenswrapper[4796]: I0127 06:48:13.746307 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:48:13 crc kubenswrapper[4796]: E0127 06:48:13.746521 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:48:13 crc kubenswrapper[4796]: I0127 06:48:13.746724 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:13 crc kubenswrapper[4796]: E0127 06:48:13.746855 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:14 crc kubenswrapper[4796]: I0127 06:48:14.746215 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:14 crc kubenswrapper[4796]: E0127 06:48:14.746417 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:14 crc kubenswrapper[4796]: I0127 06:48:14.746499 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:14 crc kubenswrapper[4796]: E0127 06:48:14.746733 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:15 crc kubenswrapper[4796]: I0127 06:48:15.746254 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:48:15 crc kubenswrapper[4796]: I0127 06:48:15.746292 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:15 crc kubenswrapper[4796]: E0127 06:48:15.746384 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:48:15 crc kubenswrapper[4796]: E0127 06:48:15.746572 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:16 crc kubenswrapper[4796]: I0127 06:48:16.746291 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:16 crc kubenswrapper[4796]: I0127 06:48:16.746371 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:16 crc kubenswrapper[4796]: E0127 06:48:16.746460 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:16 crc kubenswrapper[4796]: E0127 06:48:16.746610 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:17 crc kubenswrapper[4796]: I0127 06:48:17.746902 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:17 crc kubenswrapper[4796]: I0127 06:48:17.746904 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:48:17 crc kubenswrapper[4796]: E0127 06:48:17.747069 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:17 crc kubenswrapper[4796]: E0127 06:48:17.747233 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:48:18 crc kubenswrapper[4796]: I0127 06:48:18.747601 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:18 crc kubenswrapper[4796]: I0127 06:48:18.747627 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:18 crc kubenswrapper[4796]: E0127 06:48:18.748250 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:18 crc kubenswrapper[4796]: E0127 06:48:18.748769 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:19 crc kubenswrapper[4796]: I0127 06:48:19.746865 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:48:19 crc kubenswrapper[4796]: I0127 06:48:19.746977 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:19 crc kubenswrapper[4796]: E0127 06:48:19.747098 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:48:19 crc kubenswrapper[4796]: E0127 06:48:19.747238 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:20 crc kubenswrapper[4796]: E0127 06:48:20.680220 4796 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 27 06:48:20 crc kubenswrapper[4796]: I0127 06:48:20.747144 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:20 crc kubenswrapper[4796]: I0127 06:48:20.747154 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:20 crc kubenswrapper[4796]: E0127 06:48:20.747926 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:20 crc kubenswrapper[4796]: E0127 06:48:20.748107 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:21 crc kubenswrapper[4796]: E0127 06:48:21.093702 4796 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 06:48:21 crc kubenswrapper[4796]: I0127 06:48:21.746222 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:48:21 crc kubenswrapper[4796]: E0127 06:48:21.746358 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:48:21 crc kubenswrapper[4796]: I0127 06:48:21.746584 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:21 crc kubenswrapper[4796]: E0127 06:48:21.746661 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:22 crc kubenswrapper[4796]: I0127 06:48:22.746675 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:22 crc kubenswrapper[4796]: I0127 06:48:22.746690 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:22 crc kubenswrapper[4796]: E0127 06:48:22.747011 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:22 crc kubenswrapper[4796]: E0127 06:48:22.747194 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:23 crc kubenswrapper[4796]: I0127 06:48:23.438862 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-46ql2_b3555bc2-e335-4479-8b6f-8b5970b27a25/kube-multus/1.log" Jan 27 06:48:23 crc kubenswrapper[4796]: I0127 06:48:23.439475 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-46ql2_b3555bc2-e335-4479-8b6f-8b5970b27a25/kube-multus/0.log" Jan 27 06:48:23 crc kubenswrapper[4796]: I0127 06:48:23.439562 4796 generic.go:334] "Generic (PLEG): container finished" podID="b3555bc2-e335-4479-8b6f-8b5970b27a25" containerID="f7a41342bb50022b27d6a19ad1eef8be259001106eed45870bab131cd92fc35c" exitCode=1 Jan 27 06:48:23 crc kubenswrapper[4796]: I0127 06:48:23.439608 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-46ql2" event={"ID":"b3555bc2-e335-4479-8b6f-8b5970b27a25","Type":"ContainerDied","Data":"f7a41342bb50022b27d6a19ad1eef8be259001106eed45870bab131cd92fc35c"} Jan 27 06:48:23 crc kubenswrapper[4796]: I0127 06:48:23.439652 4796 scope.go:117] "RemoveContainer" containerID="ac652200bded234891c0d1bf4c6025ae5250577a57b06e0baf746384866b7d5d" Jan 27 06:48:23 crc kubenswrapper[4796]: I0127 06:48:23.440299 4796 scope.go:117] "RemoveContainer" containerID="f7a41342bb50022b27d6a19ad1eef8be259001106eed45870bab131cd92fc35c" Jan 27 06:48:23 crc kubenswrapper[4796]: E0127 06:48:23.440634 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-46ql2_openshift-multus(b3555bc2-e335-4479-8b6f-8b5970b27a25)\"" pod="openshift-multus/multus-46ql2" podUID="b3555bc2-e335-4479-8b6f-8b5970b27a25" Jan 27 06:48:23 crc kubenswrapper[4796]: I0127 06:48:23.746466 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:23 crc kubenswrapper[4796]: I0127 06:48:23.746575 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:48:23 crc kubenswrapper[4796]: I0127 06:48:23.747303 4796 scope.go:117] "RemoveContainer" containerID="24ee637b8b6de85e4202e420fb9eb9c78f0c496062f69cb82ed564bf9f524817" Jan 27 06:48:23 crc kubenswrapper[4796]: E0127 06:48:23.747401 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:48:23 crc kubenswrapper[4796]: E0127 06:48:23.747501 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xqmc4_openshift-ovn-kubernetes(a1fb58d6-d9a4-4095-be46-a544216963f7)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" Jan 27 06:48:23 crc kubenswrapper[4796]: E0127 06:48:23.747203 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:24 crc kubenswrapper[4796]: I0127 06:48:24.446057 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-46ql2_b3555bc2-e335-4479-8b6f-8b5970b27a25/kube-multus/1.log" Jan 27 06:48:24 crc kubenswrapper[4796]: I0127 06:48:24.746312 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:24 crc kubenswrapper[4796]: I0127 06:48:24.746591 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:24 crc kubenswrapper[4796]: E0127 06:48:24.746778 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:24 crc kubenswrapper[4796]: E0127 06:48:24.747141 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:25 crc kubenswrapper[4796]: I0127 06:48:25.746935 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:48:25 crc kubenswrapper[4796]: I0127 06:48:25.746993 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:25 crc kubenswrapper[4796]: E0127 06:48:25.747173 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:48:25 crc kubenswrapper[4796]: E0127 06:48:25.747292 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:26 crc kubenswrapper[4796]: E0127 06:48:26.095335 4796 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 06:48:26 crc kubenswrapper[4796]: I0127 06:48:26.746991 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:26 crc kubenswrapper[4796]: E0127 06:48:26.747195 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:26 crc kubenswrapper[4796]: I0127 06:48:26.747282 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:26 crc kubenswrapper[4796]: E0127 06:48:26.747588 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:27 crc kubenswrapper[4796]: I0127 06:48:27.746896 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:48:27 crc kubenswrapper[4796]: E0127 06:48:27.747213 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:48:27 crc kubenswrapper[4796]: I0127 06:48:27.747678 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:27 crc kubenswrapper[4796]: E0127 06:48:27.747870 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:28 crc kubenswrapper[4796]: I0127 06:48:28.747166 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:28 crc kubenswrapper[4796]: I0127 06:48:28.747303 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:28 crc kubenswrapper[4796]: E0127 06:48:28.747480 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:28 crc kubenswrapper[4796]: E0127 06:48:28.747626 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:29 crc kubenswrapper[4796]: I0127 06:48:29.746519 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:48:29 crc kubenswrapper[4796]: I0127 06:48:29.746599 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:29 crc kubenswrapper[4796]: E0127 06:48:29.746821 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:48:29 crc kubenswrapper[4796]: E0127 06:48:29.746987 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:30 crc kubenswrapper[4796]: I0127 06:48:30.747287 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:30 crc kubenswrapper[4796]: I0127 06:48:30.747422 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:30 crc kubenswrapper[4796]: E0127 06:48:30.750035 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:30 crc kubenswrapper[4796]: E0127 06:48:30.750225 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:31 crc kubenswrapper[4796]: E0127 06:48:31.096311 4796 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 06:48:31 crc kubenswrapper[4796]: I0127 06:48:31.746052 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:48:31 crc kubenswrapper[4796]: I0127 06:48:31.746125 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:31 crc kubenswrapper[4796]: E0127 06:48:31.746209 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:48:31 crc kubenswrapper[4796]: E0127 06:48:31.746385 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:32 crc kubenswrapper[4796]: I0127 06:48:32.746433 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:32 crc kubenswrapper[4796]: I0127 06:48:32.746483 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:32 crc kubenswrapper[4796]: E0127 06:48:32.746620 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:32 crc kubenswrapper[4796]: E0127 06:48:32.746743 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:33 crc kubenswrapper[4796]: I0127 06:48:33.747020 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:33 crc kubenswrapper[4796]: I0127 06:48:33.747165 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:48:33 crc kubenswrapper[4796]: E0127 06:48:33.747293 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:33 crc kubenswrapper[4796]: E0127 06:48:33.747449 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:48:33 crc kubenswrapper[4796]: I0127 06:48:33.747943 4796 scope.go:117] "RemoveContainer" containerID="f7a41342bb50022b27d6a19ad1eef8be259001106eed45870bab131cd92fc35c" Jan 27 06:48:34 crc kubenswrapper[4796]: I0127 06:48:34.488904 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-46ql2_b3555bc2-e335-4479-8b6f-8b5970b27a25/kube-multus/1.log" Jan 27 06:48:34 crc kubenswrapper[4796]: I0127 06:48:34.489770 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-46ql2" event={"ID":"b3555bc2-e335-4479-8b6f-8b5970b27a25","Type":"ContainerStarted","Data":"c413a9ee0ad373bb16813255f639547353f408c44b80cb704801bfad74788d62"} Jan 27 06:48:34 crc kubenswrapper[4796]: I0127 06:48:34.746300 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:34 crc kubenswrapper[4796]: I0127 06:48:34.746347 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:34 crc kubenswrapper[4796]: E0127 06:48:34.746490 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:34 crc kubenswrapper[4796]: E0127 06:48:34.746654 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:35 crc kubenswrapper[4796]: I0127 06:48:35.746320 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:48:35 crc kubenswrapper[4796]: I0127 06:48:35.746421 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:35 crc kubenswrapper[4796]: E0127 06:48:35.746512 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:48:35 crc kubenswrapper[4796]: E0127 06:48:35.746636 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:36 crc kubenswrapper[4796]: E0127 06:48:36.098402 4796 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 27 06:48:36 crc kubenswrapper[4796]: I0127 06:48:36.746226 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:36 crc kubenswrapper[4796]: I0127 06:48:36.746336 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:36 crc kubenswrapper[4796]: E0127 06:48:36.746489 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:36 crc kubenswrapper[4796]: E0127 06:48:36.746759 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:36 crc kubenswrapper[4796]: I0127 06:48:36.748179 4796 scope.go:117] "RemoveContainer" containerID="24ee637b8b6de85e4202e420fb9eb9c78f0c496062f69cb82ed564bf9f524817" Jan 27 06:48:37 crc kubenswrapper[4796]: I0127 06:48:37.501832 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xqmc4_a1fb58d6-d9a4-4095-be46-a544216963f7/ovnkube-controller/3.log" Jan 27 06:48:37 crc kubenswrapper[4796]: I0127 06:48:37.504721 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" event={"ID":"a1fb58d6-d9a4-4095-be46-a544216963f7","Type":"ContainerStarted","Data":"b0114a658b7e6fb8e035558631620e865e26a2e4358c96ea56112e7debad7b53"} Jan 27 06:48:37 crc kubenswrapper[4796]: I0127 06:48:37.505192 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:48:37 crc kubenswrapper[4796]: I0127 06:48:37.537525 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" podStartSLOduration=110.537498141 podStartE2EDuration="1m50.537498141s" podCreationTimestamp="2026-01-27 06:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:37.536086754 +0000 UTC m=+138.643054111" watchObservedRunningTime="2026-01-27 06:48:37.537498141 +0000 UTC m=+138.644465498" Jan 27 06:48:37 crc kubenswrapper[4796]: I0127 06:48:37.746359 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:48:37 crc kubenswrapper[4796]: E0127 06:48:37.746501 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:48:37 crc kubenswrapper[4796]: I0127 06:48:37.746652 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:37 crc kubenswrapper[4796]: E0127 06:48:37.746926 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:37 crc kubenswrapper[4796]: I0127 06:48:37.853518 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-gvx56"] Jan 27 06:48:38 crc kubenswrapper[4796]: I0127 06:48:38.507938 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:48:38 crc kubenswrapper[4796]: E0127 06:48:38.508070 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:48:38 crc kubenswrapper[4796]: I0127 06:48:38.746700 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:38 crc kubenswrapper[4796]: I0127 06:48:38.746774 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:38 crc kubenswrapper[4796]: E0127 06:48:38.746928 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:38 crc kubenswrapper[4796]: E0127 06:48:38.747107 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:39 crc kubenswrapper[4796]: I0127 06:48:39.746207 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:39 crc kubenswrapper[4796]: E0127 06:48:39.746420 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 27 06:48:40 crc kubenswrapper[4796]: I0127 06:48:40.747252 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:40 crc kubenswrapper[4796]: I0127 06:48:40.747347 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:48:40 crc kubenswrapper[4796]: I0127 06:48:40.747390 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:40 crc kubenswrapper[4796]: E0127 06:48:40.749421 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 27 06:48:40 crc kubenswrapper[4796]: E0127 06:48:40.749596 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gvx56" podUID="bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09" Jan 27 06:48:40 crc kubenswrapper[4796]: E0127 06:48:40.749786 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 27 06:48:41 crc kubenswrapper[4796]: I0127 06:48:41.746819 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:41 crc kubenswrapper[4796]: I0127 06:48:41.750151 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 06:48:41 crc kubenswrapper[4796]: I0127 06:48:41.750505 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 06:48:42 crc kubenswrapper[4796]: I0127 06:48:42.746845 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:42 crc kubenswrapper[4796]: I0127 06:48:42.746979 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:48:42 crc kubenswrapper[4796]: I0127 06:48:42.747113 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:42 crc kubenswrapper[4796]: I0127 06:48:42.750473 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 06:48:42 crc kubenswrapper[4796]: I0127 06:48:42.750575 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 06:48:42 crc kubenswrapper[4796]: I0127 06:48:42.751178 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 06:48:42 crc kubenswrapper[4796]: I0127 06:48:42.752764 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 06:48:43 crc kubenswrapper[4796]: I0127 06:48:43.745603 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.091667 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.159357 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pd9lg"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.159986 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pd9lg" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.161455 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-pxnl2"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.162109 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pxnl2" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.163396 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-t5kh8"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.164148 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-t5kh8" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.165007 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ts2rv"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.165707 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ts2rv" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.170092 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.170092 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.170293 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.172999 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.202391 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.203047 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.203217 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.203221 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.203427 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.203588 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.203650 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.203769 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.203606 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.203938 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.204785 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.204955 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-zprl5"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.205622 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-859tc"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.205897 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6g4p8"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.206384 4796 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-6g4p8" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.206682 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.206860 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zprl5" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.207125 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-859tc" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.208505 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.208698 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.208907 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.209069 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.209360 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.210132 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-428lj"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.210633 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-428lj" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.211391 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.211624 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.211744 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.211754 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.212480 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.212641 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.213316 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.213353 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.213416 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.213801 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.213812 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.213922 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.217978 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-ltfqp"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.218089 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.218624 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6gn8b"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.219098 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6gn8b" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.219351 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hs9jk"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.219656 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-ltfqp" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.219896 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hs9jk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.220919 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-6qfnr"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.221593 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnjwp"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.222009 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnjwp" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.222024 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-6qfnr" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.222333 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xzjrl"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.223046 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xzjrl" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.227063 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.227613 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7hj88"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.228314 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d75v6"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.228788 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7hj88" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.228957 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d75v6" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.235764 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.236085 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.236275 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.236438 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.236608 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.236966 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.237099 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.237239 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qhgrn"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.237322 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.237650 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.237777 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qhgrn" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.237051 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.237930 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.237687 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tgsj"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.238159 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.238622 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.238719 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.238967 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.239075 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.239236 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rzsch"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.239356 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.239450 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.239853 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rzsch" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.240241 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tgsj" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.241017 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-nh9gk"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.242118 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.243806 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.244416 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.245272 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nh9gk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.245434 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.247633 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ts729"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.248365 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bzdxk"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.249670 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.249819 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.250344 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.250697 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.252203 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.252288 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.252494 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.252672 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.253053 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.253414 4796 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.258360 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ts729" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.264323 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.265199 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.265469 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.265873 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.272605 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.272880 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.278407 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.279116 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.279700 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.280060 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.280258 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.280461 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.280643 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.280851 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.281000 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.281274 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wm5zr"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.281682 4796 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sk2nz"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.282113 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-pkjfq"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.282931 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sk2nz" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.283083 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.283134 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bzdxk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.283168 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.283993 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.284332 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pkjfq" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.287086 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.288662 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lf8gr"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.288854 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.289089 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.289332 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.289630 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.289858 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.289955 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lf8gr" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.290547 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.294743 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.292938 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9t47f"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.294864 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.291468 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.294650 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.295187 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.294701 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.295417 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.294955 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.295586 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mr9hz"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.295740 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9t47f" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.295924 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mr9hz" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.296021 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.296242 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.296242 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.296571 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-m8qq4"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.297234 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.306569 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-r6xbk"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.307328 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-46nhj"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.307926 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-nwtjc"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.308412 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nwtjc" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.308480 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-r6xbk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.308851 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-46nhj" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.308947 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrzb2"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.309800 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrzb2" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.310343 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x28kc\" (UniqueName: \"kubernetes.io/projected/5d1e76c1-ba8e-41b1-a312-0670b06bc59f-kube-api-access-x28kc\") pod \"multus-admission-controller-857f4d67dd-6g4p8\" (UID: \"5d1e76c1-ba8e-41b1-a312-0670b06bc59f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6g4p8" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.310375 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9gmn\" (UniqueName: \"kubernetes.io/projected/677d7062-8915-454b-aff0-999ba539d454-kube-api-access-m9gmn\") pod \"packageserver-d55dfcdfc-rzsch\" (UID: \"677d7062-8915-454b-aff0-999ba539d454\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rzsch" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.310393 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb7x2\" (UniqueName: \"kubernetes.io/projected/0b9441e9-83fe-42de-92a2-71efc3550ac1-kube-api-access-qb7x2\") pod \"olm-operator-6b444d44fb-ts729\" (UID: \"0b9441e9-83fe-42de-92a2-71efc3550ac1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ts729" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.310410 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3b10ebe2-f762-4ac3-a59a-76aa1256bdf7-audit-policies\") pod \"apiserver-7bbb656c7d-pxnl2\" (UID: \"3b10ebe2-f762-4ac3-a59a-76aa1256bdf7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pxnl2" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.310494 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3b10ebe2-f762-4ac3-a59a-76aa1256bdf7-etcd-client\") pod \"apiserver-7bbb656c7d-pxnl2\" (UID: \"3b10ebe2-f762-4ac3-a59a-76aa1256bdf7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pxnl2" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.310600 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3b10ebe2-f762-4ac3-a59a-76aa1256bdf7-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-pxnl2\" (UID: \"3b10ebe2-f762-4ac3-a59a-76aa1256bdf7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pxnl2" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.310668 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dbac4dd-abc3-4fe1-9b4c-2106cf81684e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-hs9jk\" (UID: \"7dbac4dd-abc3-4fe1-9b4c-2106cf81684e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hs9jk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.310689 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23a02d37-8e7f-4855-a203-a3d9865cdd3b-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6gn8b\" (UID: \"23a02d37-8e7f-4855-a203-a3d9865cdd3b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6gn8b" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.310709 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f4dad69-2226-4d55-95af-7fe3b3cfaf41-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xzjrl\" (UID: \"3f4dad69-2226-4d55-95af-7fe3b3cfaf41\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xzjrl" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.310763 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a982bd3f-5096-4ddc-9a62-9cec039757e1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4tgsj\" (UID: \"a982bd3f-5096-4ddc-9a62-9cec039757e1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tgsj" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.310811 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bbe422dd-b194-4dd1-a363-c9cbe72971f9-metrics-tls\") pod \"dns-operator-744455d44c-t5kh8\" (UID: \"bbe422dd-b194-4dd1-a363-c9cbe72971f9\") " pod="openshift-dns-operator/dns-operator-744455d44c-t5kh8" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.310842 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lr4m\" (UniqueName: \"kubernetes.io/projected/7dbac4dd-abc3-4fe1-9b4c-2106cf81684e-kube-api-access-2lr4m\") pod \"kube-storage-version-migrator-operator-b67b599dd-hs9jk\" (UID: \"7dbac4dd-abc3-4fe1-9b4c-2106cf81684e\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hs9jk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.310928 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-bxmhd"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.310948 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/677d7062-8915-454b-aff0-999ba539d454-webhook-cert\") pod \"packageserver-d55dfcdfc-rzsch\" (UID: \"677d7062-8915-454b-aff0-999ba539d454\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rzsch" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.311058 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3b10ebe2-f762-4ac3-a59a-76aa1256bdf7-audit-dir\") pod \"apiserver-7bbb656c7d-pxnl2\" (UID: \"3b10ebe2-f762-4ac3-a59a-76aa1256bdf7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pxnl2" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.311095 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8dws\" (UniqueName: \"kubernetes.io/projected/c009d452-642e-47de-994c-cc6e0af791f9-kube-api-access-d8dws\") pod \"marketplace-operator-79b997595-qhgrn\" (UID: \"c009d452-642e-47de-994c-cc6e0af791f9\") " pod="openshift-marketplace/marketplace-operator-79b997595-qhgrn" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.311154 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e271bec2-e43e-4236-a3d5-024b55665af9-serving-cert\") pod \"authentication-operator-69f744f599-859tc\" (UID: \"e271bec2-e43e-4236-a3d5-024b55665af9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-859tc" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.311184 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48206a38-ab99-4560-9cf1-6a260f52c37d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-tnjwp\" (UID: \"48206a38-ab99-4560-9cf1-6a260f52c37d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnjwp" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.311203 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/677d7062-8915-454b-aff0-999ba539d454-apiservice-cert\") pod \"packageserver-d55dfcdfc-rzsch\" (UID: \"677d7062-8915-454b-aff0-999ba539d454\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rzsch" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.311233 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ftsm\" (UniqueName: \"kubernetes.io/projected/a982bd3f-5096-4ddc-9a62-9cec039757e1-kube-api-access-7ftsm\") pod \"cluster-image-registry-operator-dc59b4c8b-4tgsj\" (UID: \"a982bd3f-5096-4ddc-9a62-9cec039757e1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tgsj" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.311259 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccqc7\" (UniqueName: \"kubernetes.io/projected/a08326b5-7ad1-43ef-987c-86b61eeade10-kube-api-access-ccqc7\") pod \"openshift-apiserver-operator-796bbdcf4f-7hj88\" (UID: \"a08326b5-7ad1-43ef-987c-86b61eeade10\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7hj88" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.311325 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1bb7a5d9-b5b4-41b6-ac58-a33656c85de0-client-ca\") pod \"route-controller-manager-6576b87f9c-428lj\" (UID: \"1bb7a5d9-b5b4-41b6-ac58-a33656c85de0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-428lj" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.311366 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ff492068-85f5-4775-b13f-2604432a063e-profile-collector-cert\") pod \"catalog-operator-68c6474976-d75v6\" (UID: \"ff492068-85f5-4775-b13f-2604432a063e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d75v6" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.311402 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9e475ebe-1a52-4423-bc08-5c659c92f7e8-etcd-client\") pod \"etcd-operator-b45778765-6qfnr\" (UID: \"9e475ebe-1a52-4423-bc08-5c659c92f7e8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6qfnr" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.311430 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9674\" (UniqueName: \"kubernetes.io/projected/3b10ebe2-f762-4ac3-a59a-76aa1256bdf7-kube-api-access-h9674\") pod \"apiserver-7bbb656c7d-pxnl2\" (UID: \"3b10ebe2-f762-4ac3-a59a-76aa1256bdf7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pxnl2" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.311462 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/677d7062-8915-454b-aff0-999ba539d454-tmpfs\") pod \"packageserver-d55dfcdfc-rzsch\" (UID: \"677d7062-8915-454b-aff0-999ba539d454\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rzsch" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.311486 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b10ebe2-f762-4ac3-a59a-76aa1256bdf7-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-pxnl2\" (UID: \"3b10ebe2-f762-4ac3-a59a-76aa1256bdf7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pxnl2" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.311594 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szxrj\" (UniqueName: \"kubernetes.io/projected/e271bec2-e43e-4236-a3d5-024b55665af9-kube-api-access-szxrj\") pod \"authentication-operator-69f744f599-859tc\" (UID: \"e271bec2-e43e-4236-a3d5-024b55665af9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-859tc" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.311636 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e475ebe-1a52-4423-bc08-5c659c92f7e8-serving-cert\") pod \"etcd-operator-b45778765-6qfnr\" (UID: \"9e475ebe-1a52-4423-bc08-5c659c92f7e8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6qfnr" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.311666 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0b9441e9-83fe-42de-92a2-71efc3550ac1-srv-cert\") pod \"olm-operator-6b444d44fb-ts729\" (UID: \"0b9441e9-83fe-42de-92a2-71efc3550ac1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ts729" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.311687 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23a02d37-8e7f-4855-a203-a3d9865cdd3b-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6gn8b\" (UID: \"23a02d37-8e7f-4855-a203-a3d9865cdd3b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6gn8b" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.311713 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e271bec2-e43e-4236-a3d5-024b55665af9-config\") pod \"authentication-operator-69f744f599-859tc\" (UID: \"e271bec2-e43e-4236-a3d5-024b55665af9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-859tc" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.311734 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wn9g\" (UniqueName: \"kubernetes.io/projected/48206a38-ab99-4560-9cf1-6a260f52c37d-kube-api-access-5wn9g\") pod \"openshift-controller-manager-operator-756b6f6bc6-tnjwp\" (UID: \"48206a38-ab99-4560-9cf1-6a260f52c37d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnjwp" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.311753 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23a02d37-8e7f-4855-a203-a3d9865cdd3b-config\") pod \"kube-apiserver-operator-766d6c64bb-6gn8b\" (UID: \"23a02d37-8e7f-4855-a203-a3d9865cdd3b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6gn8b" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.311771 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/24ab0c5d-3e8a-4f6a-8060-882262c28888-machine-approver-tls\") pod \"machine-approver-56656f9798-nh9gk\" (UID: \"24ab0c5d-3e8a-4f6a-8060-882262c28888\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nh9gk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.311834 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9e475ebe-1a52-4423-bc08-5c659c92f7e8-etcd-ca\") pod \"etcd-operator-b45778765-6qfnr\" (UID: \"9e475ebe-1a52-4423-bc08-5c659c92f7e8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6qfnr" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.311856 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f4dad69-2226-4d55-95af-7fe3b3cfaf41-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xzjrl\" (UID: \"3f4dad69-2226-4d55-95af-7fe3b3cfaf41\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xzjrl" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.311876 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bb7a5d9-b5b4-41b6-ac58-a33656c85de0-config\") pod \"route-controller-manager-6576b87f9c-428lj\" (UID: \"1bb7a5d9-b5b4-41b6-ac58-a33656c85de0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-428lj" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.311896 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mvnd\" (UniqueName: \"kubernetes.io/projected/bcacc8cc-dbaa-4826-9939-0af3e0cc3297-kube-api-access-5mvnd\") pod \"migrator-59844c95c7-ts2rv\" (UID: \"bcacc8cc-dbaa-4826-9939-0af3e0cc3297\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ts2rv" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.311919 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a82fe90d-0a61-4da5-bfac-7c69f6a59339-available-featuregates\") pod \"openshift-config-operator-7777fb866f-zprl5\" (UID: \"a82fe90d-0a61-4da5-bfac-7c69f6a59339\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zprl5" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.311951 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e271bec2-e43e-4236-a3d5-024b55665af9-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-859tc\" (UID: \"e271bec2-e43e-4236-a3d5-024b55665af9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-859tc" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.311994 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4bb970b4-43c2-46e2-b707-20145a03a2bb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pd9lg\" (UID: \"4bb970b4-43c2-46e2-b707-20145a03a2bb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pd9lg" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.312003 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.312029 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f4dad69-2226-4d55-95af-7fe3b3cfaf41-config\") pod \"kube-controller-manager-operator-78b949d7b-xzjrl\" (UID: \"3f4dad69-2226-4d55-95af-7fe3b3cfaf41\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xzjrl" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.312052 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cx9d\" (UniqueName: \"kubernetes.io/projected/4bb970b4-43c2-46e2-b707-20145a03a2bb-kube-api-access-6cx9d\") pod \"control-plane-machine-set-operator-78cbb6b69f-pd9lg\" (UID: \"4bb970b4-43c2-46e2-b707-20145a03a2bb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pd9lg" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.312074 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfsv8\" (UniqueName: \"kubernetes.io/projected/24ab0c5d-3e8a-4f6a-8060-882262c28888-kube-api-access-lfsv8\") pod \"machine-approver-56656f9798-nh9gk\" (UID: \"24ab0c5d-3e8a-4f6a-8060-882262c28888\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nh9gk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.312103 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a08326b5-7ad1-43ef-987c-86b61eeade10-config\") pod \"openshift-apiserver-operator-796bbdcf4f-7hj88\" (UID: \"a08326b5-7ad1-43ef-987c-86b61eeade10\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7hj88" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.312128 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e271bec2-e43e-4236-a3d5-024b55665af9-service-ca-bundle\") pod \"authentication-operator-69f744f599-859tc\" (UID: \"e271bec2-e43e-4236-a3d5-024b55665af9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-859tc" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.312148 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48206a38-ab99-4560-9cf1-6a260f52c37d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-tnjwp\" (UID: \"48206a38-ab99-4560-9cf1-6a260f52c37d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnjwp" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.312168 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b10ebe2-f762-4ac3-a59a-76aa1256bdf7-serving-cert\") pod \"apiserver-7bbb656c7d-pxnl2\" (UID: \"3b10ebe2-f762-4ac3-a59a-76aa1256bdf7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pxnl2" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.312190 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ff492068-85f5-4775-b13f-2604432a063e-srv-cert\") pod 
\"catalog-operator-68c6474976-d75v6\" (UID: \"ff492068-85f5-4775-b13f-2604432a063e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d75v6" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.312223 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e475ebe-1a52-4423-bc08-5c659c92f7e8-config\") pod \"etcd-operator-b45778765-6qfnr\" (UID: \"9e475ebe-1a52-4423-bc08-5c659c92f7e8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6qfnr" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.312243 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2smt\" (UniqueName: \"kubernetes.io/projected/bbe422dd-b194-4dd1-a363-c9cbe72971f9-kube-api-access-w2smt\") pod \"dns-operator-744455d44c-t5kh8\" (UID: \"bbe422dd-b194-4dd1-a363-c9cbe72971f9\") " pod="openshift-dns-operator/dns-operator-744455d44c-t5kh8" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.312263 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a08326b5-7ad1-43ef-987c-86b61eeade10-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-7hj88\" (UID: \"a08326b5-7ad1-43ef-987c-86b61eeade10\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7hj88" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.312287 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4tmk\" (UniqueName: \"kubernetes.io/projected/9e475ebe-1a52-4423-bc08-5c659c92f7e8-kube-api-access-b4tmk\") pod \"etcd-operator-b45778765-6qfnr\" (UID: \"9e475ebe-1a52-4423-bc08-5c659c92f7e8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6qfnr" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.312307 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bb7a5d9-b5b4-41b6-ac58-a33656c85de0-serving-cert\") pod \"route-controller-manager-6576b87f9c-428lj\" (UID: \"1bb7a5d9-b5b4-41b6-ac58-a33656c85de0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-428lj" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.314353 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3b10ebe2-f762-4ac3-a59a-76aa1256bdf7-encryption-config\") pod \"apiserver-7bbb656c7d-pxnl2\" (UID: \"3b10ebe2-f762-4ac3-a59a-76aa1256bdf7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pxnl2" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.314387 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a982bd3f-5096-4ddc-9a62-9cec039757e1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4tgsj\" (UID: \"a982bd3f-5096-4ddc-9a62-9cec039757e1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tgsj" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.314408 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stw2d\" (UniqueName: \"kubernetes.io/projected/ff492068-85f5-4775-b13f-2604432a063e-kube-api-access-stw2d\") pod 
\"catalog-operator-68c6474976-d75v6\" (UID: \"ff492068-85f5-4775-b13f-2604432a063e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d75v6" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.314441 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5d1e76c1-ba8e-41b1-a312-0670b06bc59f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6g4p8\" (UID: \"5d1e76c1-ba8e-41b1-a312-0670b06bc59f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6g4p8" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.314459 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c009d452-642e-47de-994c-cc6e0af791f9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qhgrn\" (UID: \"c009d452-642e-47de-994c-cc6e0af791f9\") " pod="openshift-marketplace/marketplace-operator-79b997595-qhgrn" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.314479 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24ab0c5d-3e8a-4f6a-8060-882262c28888-config\") pod \"machine-approver-56656f9798-nh9gk\" (UID: \"24ab0c5d-3e8a-4f6a-8060-882262c28888\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nh9gk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.314497 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0b9441e9-83fe-42de-92a2-71efc3550ac1-profile-collector-cert\") pod \"olm-operator-6b444d44fb-ts729\" (UID: \"0b9441e9-83fe-42de-92a2-71efc3550ac1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ts729" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.314517 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a82fe90d-0a61-4da5-bfac-7c69f6a59339-serving-cert\") pod \"openshift-config-operator-7777fb866f-zprl5\" (UID: \"a82fe90d-0a61-4da5-bfac-7c69f6a59339\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zprl5" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.314550 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/24ab0c5d-3e8a-4f6a-8060-882262c28888-auth-proxy-config\") pod \"machine-approver-56656f9798-nh9gk\" (UID: \"24ab0c5d-3e8a-4f6a-8060-882262c28888\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nh9gk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.314575 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9e475ebe-1a52-4423-bc08-5c659c92f7e8-etcd-service-ca\") pod \"etcd-operator-b45778765-6qfnr\" (UID: \"9e475ebe-1a52-4423-bc08-5c659c92f7e8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6qfnr" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.314603 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a982bd3f-5096-4ddc-9a62-9cec039757e1-trusted-ca\") pod 
\"cluster-image-registry-operator-dc59b4c8b-4tgsj\" (UID: \"a982bd3f-5096-4ddc-9a62-9cec039757e1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tgsj" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.314622 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr7ff\" (UniqueName: \"kubernetes.io/projected/f2e2d019-7fb7-4f75-81ee-b20a700c8f0b-kube-api-access-xr7ff\") pod \"downloads-7954f5f757-ltfqp\" (UID: \"f2e2d019-7fb7-4f75-81ee-b20a700c8f0b\") " pod="openshift-console/downloads-7954f5f757-ltfqp" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.314639 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqvpf\" (UniqueName: \"kubernetes.io/projected/1bb7a5d9-b5b4-41b6-ac58-a33656c85de0-kube-api-access-hqvpf\") pod \"route-controller-manager-6576b87f9c-428lj\" (UID: \"1bb7a5d9-b5b4-41b6-ac58-a33656c85de0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-428lj" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.314665 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g5xl\" (UniqueName: \"kubernetes.io/projected/a82fe90d-0a61-4da5-bfac-7c69f6a59339-kube-api-access-4g5xl\") pod \"openshift-config-operator-7777fb866f-zprl5\" (UID: \"a82fe90d-0a61-4da5-bfac-7c69f6a59339\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zprl5" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.314683 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dbac4dd-abc3-4fe1-9b4c-2106cf81684e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-hs9jk\" (UID: \"7dbac4dd-abc3-4fe1-9b4c-2106cf81684e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hs9jk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.314700 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c009d452-642e-47de-994c-cc6e0af791f9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qhgrn\" (UID: \"c009d452-642e-47de-994c-cc6e0af791f9\") " pod="openshift-marketplace/marketplace-operator-79b997595-qhgrn" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.314891 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-qwn64"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.315243 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.315607 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4ddms"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.316105 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-4ddms" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.316277 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-qwn64" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.323705 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-2ggnl"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.325887 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.326438 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-2ggnl" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.328550 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ts2rv"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.331664 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-pxnl2"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.337134 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-zprl5"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.337846 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491605-kd67n"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.338613 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-kd67n" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.342583 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnjwp"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.345827 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hs9jk"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.357938 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-859tc"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.361331 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-6qfnr"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.362481 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-r49vt"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.363324 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-r49vt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.363748 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ts729"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.364099 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.366023 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bzdxk"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.366968 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tgsj"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.368441 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qhgrn"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.369901 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-428lj"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.371314 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-ltfqp"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.372891 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xzjrl"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.374145 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sk2nz"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.375166 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pd9lg"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.376339 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6g4p8"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.377676 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-r6xbk"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.378831 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rzsch"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.382289 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.383218 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4ddms"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.390575 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d75v6"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.401708 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.404527 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lf8gr"] Jan 27 06:48:45 crc 
kubenswrapper[4796]: I0127 06:48:45.405865 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mr9hz"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.407091 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-pkjfq"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.408153 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9t47f"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.409267 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wm5zr"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.410322 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrzb2"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.411428 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-t5kh8"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.412569 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6gn8b"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.413586 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-nwtjc"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.414636 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7hj88"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.415178 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2b48287-9900-438d-b356-0859859a90e8-serving-cert\") pod \"console-operator-58897d9998-4ddms\" (UID: \"d2b48287-9900-438d-b356-0859859a90e8\") " pod="openshift-console-operator/console-operator-58897d9998-4ddms" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.415208 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/28c00736-23cd-46f9-b656-94a11f71a470-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-pkjfq\" (UID: \"28c00736-23cd-46f9-b656-94a11f71a470\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pkjfq" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.415230 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e05300a3-a8c8-495f-a4fd-79f326ce0d73-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lf8gr\" (UID: \"e05300a3-a8c8-495f-a4fd-79f326ce0d73\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lf8gr" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.415248 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/68d73a51-598c-41e8-9064-3942bd4f93df-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-46nhj\" (UID: \"68d73a51-598c-41e8-9064-3942bd4f93df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-46nhj" Jan 27 06:48:45 crc 
kubenswrapper[4796]: I0127 06:48:45.415267 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bf5d3de7-2818-4aa8-9c56-81244d431713-etcd-serving-ca\") pod \"apiserver-76f77b778f-m8qq4\" (UID: \"bf5d3de7-2818-4aa8-9c56-81244d431713\") " pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.415285 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf5d3de7-2818-4aa8-9c56-81244d431713-trusted-ca-bundle\") pod \"apiserver-76f77b778f-m8qq4\" (UID: \"bf5d3de7-2818-4aa8-9c56-81244d431713\") " pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.415357 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c009d452-642e-47de-994c-cc6e0af791f9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qhgrn\" (UID: \"c009d452-642e-47de-994c-cc6e0af791f9\") " pod="openshift-marketplace/marketplace-operator-79b997595-qhgrn" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.415397 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24ab0c5d-3e8a-4f6a-8060-882262c28888-config\") pod \"machine-approver-56656f9798-nh9gk\" (UID: \"24ab0c5d-3e8a-4f6a-8060-882262c28888\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nh9gk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.415432 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e05300a3-a8c8-495f-a4fd-79f326ce0d73-trusted-ca\") pod \"ingress-operator-5b745b69d9-lf8gr\" (UID: \"e05300a3-a8c8-495f-a4fd-79f326ce0d73\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lf8gr" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.415668 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0b9441e9-83fe-42de-92a2-71efc3550ac1-profile-collector-cert\") pod \"olm-operator-6b444d44fb-ts729\" (UID: \"0b9441e9-83fe-42de-92a2-71efc3550ac1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ts729" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.415736 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a82fe90d-0a61-4da5-bfac-7c69f6a59339-serving-cert\") pod \"openshift-config-operator-7777fb866f-zprl5\" (UID: \"a82fe90d-0a61-4da5-bfac-7c69f6a59339\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zprl5" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.415768 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b85sb\" (UniqueName: \"kubernetes.io/projected/28c00736-23cd-46f9-b656-94a11f71a470-kube-api-access-b85sb\") pod \"machine-config-controller-84d6567774-pkjfq\" (UID: \"28c00736-23cd-46f9-b656-94a11f71a470\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pkjfq" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.415799 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a982bd3f-5096-4ddc-9a62-9cec039757e1-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4tgsj\" (UID: \"a982bd3f-5096-4ddc-9a62-9cec039757e1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tgsj" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.415820 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqvpf\" (UniqueName: \"kubernetes.io/projected/1bb7a5d9-b5b4-41b6-ac58-a33656c85de0-kube-api-access-hqvpf\") pod \"route-controller-manager-6576b87f9c-428lj\" (UID: \"1bb7a5d9-b5b4-41b6-ac58-a33656c85de0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-428lj" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.415839 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c009d452-642e-47de-994c-cc6e0af791f9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qhgrn\" (UID: \"c009d452-642e-47de-994c-cc6e0af791f9\") " pod="openshift-marketplace/marketplace-operator-79b997595-qhgrn" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.415862 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/73db7fe9-d1e0-44f1-88fa-a186dec7a4b0-proxy-tls\") pod \"machine-config-operator-74547568cd-9t47f\" (UID: \"73db7fe9-d1e0-44f1-88fa-a186dec7a4b0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9t47f" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.415885 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/192cd808-feeb-4944-a1b3-99109ea0928e-serving-cert\") pod \"controller-manager-879f6c89f-mr9hz\" (UID: \"192cd808-feeb-4944-a1b3-99109ea0928e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mr9hz" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.415907 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g5xl\" (UniqueName: \"kubernetes.io/projected/a82fe90d-0a61-4da5-bfac-7c69f6a59339-kube-api-access-4g5xl\") pod \"openshift-config-operator-7777fb866f-zprl5\" (UID: \"a82fe90d-0a61-4da5-bfac-7c69f6a59339\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zprl5" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.415925 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/56d7f37b-05cc-4a36-b844-423465e79e8e-audit-policies\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.415946 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/753527d9-7500-4526-be72-3306582c5f7d-config\") pod \"service-ca-operator-777779d784-nwtjc\" (UID: \"753527d9-7500-4526-be72-3306582c5f7d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nwtjc" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.415968 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/bf5d3de7-2818-4aa8-9c56-81244d431713-serving-cert\") pod \"apiserver-76f77b778f-m8qq4\" (UID: \"bf5d3de7-2818-4aa8-9c56-81244d431713\") " pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.416017 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3b10ebe2-f762-4ac3-a59a-76aa1256bdf7-etcd-client\") pod \"apiserver-7bbb656c7d-pxnl2\" (UID: \"3b10ebe2-f762-4ac3-a59a-76aa1256bdf7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pxnl2" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.416051 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3b10ebe2-f762-4ac3-a59a-76aa1256bdf7-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-pxnl2\" (UID: \"3b10ebe2-f762-4ac3-a59a-76aa1256bdf7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pxnl2" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.416071 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.416090 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf5d3de7-2818-4aa8-9c56-81244d431713-config\") pod \"apiserver-76f77b778f-m8qq4\" (UID: \"bf5d3de7-2818-4aa8-9c56-81244d431713\") " pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.416110 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dbac4dd-abc3-4fe1-9b4c-2106cf81684e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-hs9jk\" (UID: \"7dbac4dd-abc3-4fe1-9b4c-2106cf81684e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hs9jk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.416129 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/00061f00-b799-407e-8b71-30de57b92847-console-config\") pod \"console-f9d7485db-r6xbk\" (UID: \"00061f00-b799-407e-8b71-30de57b92847\") " pod="openshift-console/console-f9d7485db-r6xbk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.416151 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/55c59494-3868-4467-a864-52b1b3385b5e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wrzb2\" (UID: \"55c59494-3868-4467-a864-52b1b3385b5e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrzb2" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.416171 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-user-template-login\") pod 
\"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.416191 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f4dad69-2226-4d55-95af-7fe3b3cfaf41-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xzjrl\" (UID: \"3f4dad69-2226-4d55-95af-7fe3b3cfaf41\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xzjrl" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.416210 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/28c00736-23cd-46f9-b656-94a11f71a470-proxy-tls\") pod \"machine-config-controller-84d6567774-pkjfq\" (UID: \"28c00736-23cd-46f9-b656-94a11f71a470\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pkjfq" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.416230 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/68d73a51-598c-41e8-9064-3942bd4f93df-images\") pod \"machine-api-operator-5694c8668f-46nhj\" (UID: \"68d73a51-598c-41e8-9064-3942bd4f93df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-46nhj" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.416241 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-m8qq4"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.416246 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.416272 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a982bd3f-5096-4ddc-9a62-9cec039757e1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4tgsj\" (UID: \"a982bd3f-5096-4ddc-9a62-9cec039757e1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tgsj" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.416291 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/677d7062-8915-454b-aff0-999ba539d454-webhook-cert\") pod \"packageserver-d55dfcdfc-rzsch\" (UID: \"677d7062-8915-454b-aff0-999ba539d454\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rzsch" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.416309 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8dws\" (UniqueName: \"kubernetes.io/projected/c009d452-642e-47de-994c-cc6e0af791f9-kube-api-access-d8dws\") pod \"marketplace-operator-79b997595-qhgrn\" (UID: \"c009d452-642e-47de-994c-cc6e0af791f9\") " pod="openshift-marketplace/marketplace-operator-79b997595-qhgrn" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.416328 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48206a38-ab99-4560-9cf1-6a260f52c37d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-tnjwp\" (UID: \"48206a38-ab99-4560-9cf1-6a260f52c37d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnjwp" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.416345 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/73db7fe9-d1e0-44f1-88fa-a186dec7a4b0-images\") pod \"machine-config-operator-74547568cd-9t47f\" (UID: \"73db7fe9-d1e0-44f1-88fa-a186dec7a4b0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9t47f" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.416393 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e271bec2-e43e-4236-a3d5-024b55665af9-serving-cert\") pod \"authentication-operator-69f744f599-859tc\" (UID: \"e271bec2-e43e-4236-a3d5-024b55665af9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-859tc" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.416429 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/bf5d3de7-2818-4aa8-9c56-81244d431713-audit\") pod \"apiserver-76f77b778f-m8qq4\" (UID: \"bf5d3de7-2818-4aa8-9c56-81244d431713\") " pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.416457 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9e475ebe-1a52-4423-bc08-5c659c92f7e8-etcd-client\") pod \"etcd-operator-b45778765-6qfnr\" (UID: \"9e475ebe-1a52-4423-bc08-5c659c92f7e8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6qfnr" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.416477 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/677d7062-8915-454b-aff0-999ba539d454-tmpfs\") pod \"packageserver-d55dfcdfc-rzsch\" (UID: \"677d7062-8915-454b-aff0-999ba539d454\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rzsch" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.416498 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/00061f00-b799-407e-8b71-30de57b92847-console-serving-cert\") pod \"console-f9d7485db-r6xbk\" (UID: \"00061f00-b799-407e-8b71-30de57b92847\") " pod="openshift-console/console-f9d7485db-r6xbk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.416517 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.416554 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghgz4\" (UniqueName: 
\"kubernetes.io/projected/56d7f37b-05cc-4a36-b844-423465e79e8e-kube-api-access-ghgz4\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.416575 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szxrj\" (UniqueName: \"kubernetes.io/projected/e271bec2-e43e-4236-a3d5-024b55665af9-kube-api-access-szxrj\") pod \"authentication-operator-69f744f599-859tc\" (UID: \"e271bec2-e43e-4236-a3d5-024b55665af9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-859tc" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.416599 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bf5d3de7-2818-4aa8-9c56-81244d431713-encryption-config\") pod \"apiserver-76f77b778f-m8qq4\" (UID: \"bf5d3de7-2818-4aa8-9c56-81244d431713\") " pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.416617 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc6lv\" (UniqueName: \"kubernetes.io/projected/34e19426-cb00-4e09-9933-57a015735c77-kube-api-access-zc6lv\") pod \"cluster-samples-operator-665b6dd947-bzdxk\" (UID: \"34e19426-cb00-4e09-9933-57a015735c77\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bzdxk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.416639 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e475ebe-1a52-4423-bc08-5c659c92f7e8-serving-cert\") pod \"etcd-operator-b45778765-6qfnr\" (UID: \"9e475ebe-1a52-4423-bc08-5c659c92f7e8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6qfnr" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.416658 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/bf5d3de7-2818-4aa8-9c56-81244d431713-image-import-ca\") pod \"apiserver-76f77b778f-m8qq4\" (UID: \"bf5d3de7-2818-4aa8-9c56-81244d431713\") " pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.416682 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e271bec2-e43e-4236-a3d5-024b55665af9-config\") pod \"authentication-operator-69f744f599-859tc\" (UID: \"e271bec2-e43e-4236-a3d5-024b55665af9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-859tc" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.416705 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wn9g\" (UniqueName: \"kubernetes.io/projected/48206a38-ab99-4560-9cf1-6a260f52c37d-kube-api-access-5wn9g\") pod \"openshift-controller-manager-operator-756b6f6bc6-tnjwp\" (UID: \"48206a38-ab99-4560-9cf1-6a260f52c37d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnjwp" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.416707 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-69rqv"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.416751 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c009d452-642e-47de-994c-cc6e0af791f9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qhgrn\" (UID: \"c009d452-642e-47de-994c-cc6e0af791f9\") " pod="openshift-marketplace/marketplace-operator-79b997595-qhgrn" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.417387 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23a02d37-8e7f-4855-a203-a3d9865cdd3b-config\") pod \"kube-apiserver-operator-766d6c64bb-6gn8b\" (UID: \"23a02d37-8e7f-4855-a203-a3d9865cdd3b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6gn8b" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.418112 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-69rqv" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.416732 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23a02d37-8e7f-4855-a203-a3d9865cdd3b-config\") pod \"kube-apiserver-operator-766d6c64bb-6gn8b\" (UID: \"23a02d37-8e7f-4855-a203-a3d9865cdd3b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6gn8b" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.416203 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24ab0c5d-3e8a-4f6a-8060-882262c28888-config\") pod \"machine-approver-56656f9798-nh9gk\" (UID: \"24ab0c5d-3e8a-4f6a-8060-882262c28888\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nh9gk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.418491 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.418548 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bb7a5d9-b5b4-41b6-ac58-a33656c85de0-config\") pod \"route-controller-manager-6576b87f9c-428lj\" (UID: \"1bb7a5d9-b5b4-41b6-ac58-a33656c85de0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-428lj" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.418576 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mvnd\" (UniqueName: \"kubernetes.io/projected/bcacc8cc-dbaa-4826-9939-0af3e0cc3297-kube-api-access-5mvnd\") pod \"migrator-59844c95c7-ts2rv\" (UID: \"bcacc8cc-dbaa-4826-9939-0af3e0cc3297\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ts2rv" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.418597 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.418621 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bf5d3de7-2818-4aa8-9c56-81244d431713-audit-dir\") pod \"apiserver-76f77b778f-m8qq4\" (UID: \"bf5d3de7-2818-4aa8-9c56-81244d431713\") " pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.418647 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f4dad69-2226-4d55-95af-7fe3b3cfaf41-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xzjrl\" (UID: \"3f4dad69-2226-4d55-95af-7fe3b3cfaf41\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xzjrl" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.418671 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5swjv\" (UniqueName: \"kubernetes.io/projected/e05300a3-a8c8-495f-a4fd-79f326ce0d73-kube-api-access-5swjv\") pod \"ingress-operator-5b745b69d9-lf8gr\" (UID: \"e05300a3-a8c8-495f-a4fd-79f326ce0d73\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lf8gr" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.418693 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bz49\" (UniqueName: \"kubernetes.io/projected/98eaa352-8307-40bd-b8a3-16f2e3088fa4-kube-api-access-8bz49\") pod \"collect-profiles-29491605-kd67n\" (UID: \"98eaa352-8307-40bd-b8a3-16f2e3088fa4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-kd67n" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.418720 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a82fe90d-0a61-4da5-bfac-7c69f6a59339-available-featuregates\") pod \"openshift-config-operator-7777fb866f-zprl5\" (UID: \"a82fe90d-0a61-4da5-bfac-7c69f6a59339\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zprl5" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.418743 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4bb970b4-43c2-46e2-b707-20145a03a2bb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pd9lg\" (UID: \"4bb970b4-43c2-46e2-b707-20145a03a2bb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pd9lg" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.418765 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr89j\" (UniqueName: \"kubernetes.io/projected/d2b48287-9900-438d-b356-0859859a90e8-kube-api-access-hr89j\") pod \"console-operator-58897d9998-4ddms\" (UID: \"d2b48287-9900-438d-b356-0859859a90e8\") " pod="openshift-console-operator/console-operator-58897d9998-4ddms" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.418789 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f4dad69-2226-4d55-95af-7fe3b3cfaf41-config\") pod \"kube-controller-manager-operator-78b949d7b-xzjrl\" (UID: 
\"3f4dad69-2226-4d55-95af-7fe3b3cfaf41\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xzjrl" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.418810 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfsv8\" (UniqueName: \"kubernetes.io/projected/24ab0c5d-3e8a-4f6a-8060-882262c28888-kube-api-access-lfsv8\") pod \"machine-approver-56656f9798-nh9gk\" (UID: \"24ab0c5d-3e8a-4f6a-8060-882262c28888\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nh9gk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.418834 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.418861 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a08326b5-7ad1-43ef-987c-86b61eeade10-config\") pod \"openshift-apiserver-operator-796bbdcf4f-7hj88\" (UID: \"a08326b5-7ad1-43ef-987c-86b61eeade10\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7hj88" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.418887 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kxts\" (UniqueName: \"kubernetes.io/projected/753527d9-7500-4526-be72-3306582c5f7d-kube-api-access-4kxts\") pod \"service-ca-operator-777779d784-nwtjc\" (UID: \"753527d9-7500-4526-be72-3306582c5f7d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nwtjc" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.418912 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/192cd808-feeb-4944-a1b3-99109ea0928e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mr9hz\" (UID: \"192cd808-feeb-4944-a1b3-99109ea0928e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mr9hz" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.418941 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48206a38-ab99-4560-9cf1-6a260f52c37d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-tnjwp\" (UID: \"48206a38-ab99-4560-9cf1-6a260f52c37d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnjwp" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.419969 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a82fe90d-0a61-4da5-bfac-7c69f6a59339-available-featuregates\") pod \"openshift-config-operator-7777fb866f-zprl5\" (UID: \"a82fe90d-0a61-4da5-bfac-7c69f6a59339\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zprl5" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.420010 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48206a38-ab99-4560-9cf1-6a260f52c37d-config\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-tnjwp\" (UID: \"48206a38-ab99-4560-9cf1-6a260f52c37d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnjwp" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.420082 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ff492068-85f5-4775-b13f-2604432a063e-srv-cert\") pod \"catalog-operator-68c6474976-d75v6\" (UID: \"ff492068-85f5-4775-b13f-2604432a063e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d75v6" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.420119 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a08326b5-7ad1-43ef-987c-86b61eeade10-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-7hj88\" (UID: \"a08326b5-7ad1-43ef-987c-86b61eeade10\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7hj88" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.420167 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e475ebe-1a52-4423-bc08-5c659c92f7e8-config\") pod \"etcd-operator-b45778765-6qfnr\" (UID: \"9e475ebe-1a52-4423-bc08-5c659c92f7e8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6qfnr" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.420197 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2smt\" (UniqueName: \"kubernetes.io/projected/bbe422dd-b194-4dd1-a363-c9cbe72971f9-kube-api-access-w2smt\") pod \"dns-operator-744455d44c-t5kh8\" (UID: \"bbe422dd-b194-4dd1-a363-c9cbe72971f9\") " pod="openshift-dns-operator/dns-operator-744455d44c-t5kh8" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.420225 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bb7a5d9-b5b4-41b6-ac58-a33656c85de0-serving-cert\") pod \"route-controller-manager-6576b87f9c-428lj\" (UID: \"1bb7a5d9-b5b4-41b6-ac58-a33656c85de0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-428lj" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.420262 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px6rg\" (UniqueName: \"kubernetes.io/projected/00061f00-b799-407e-8b71-30de57b92847-kube-api-access-px6rg\") pod \"console-f9d7485db-r6xbk\" (UID: \"00061f00-b799-407e-8b71-30de57b92847\") " pod="openshift-console/console-f9d7485db-r6xbk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.420301 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4tmk\" (UniqueName: \"kubernetes.io/projected/9e475ebe-1a52-4423-bc08-5c659c92f7e8-kube-api-access-b4tmk\") pod \"etcd-operator-b45778765-6qfnr\" (UID: \"9e475ebe-1a52-4423-bc08-5c659c92f7e8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6qfnr" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.420341 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3b10ebe2-f762-4ac3-a59a-76aa1256bdf7-encryption-config\") pod \"apiserver-7bbb656c7d-pxnl2\" (UID: \"3b10ebe2-f762-4ac3-a59a-76aa1256bdf7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pxnl2" Jan 27 06:48:45 
crc kubenswrapper[4796]: I0127 06:48:45.420399 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a982bd3f-5096-4ddc-9a62-9cec039757e1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4tgsj\" (UID: \"a982bd3f-5096-4ddc-9a62-9cec039757e1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tgsj" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.420429 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2b48287-9900-438d-b356-0859859a90e8-config\") pod \"console-operator-58897d9998-4ddms\" (UID: \"d2b48287-9900-438d-b356-0859859a90e8\") " pod="openshift-console-operator/console-operator-58897d9998-4ddms" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.420457 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nthlh\" (UniqueName: \"kubernetes.io/projected/ee6f1f88-f21e-4b9b-ab68-79d68bd37a7f-kube-api-access-nthlh\") pod \"service-ca-9c57cc56f-2ggnl\" (UID: \"ee6f1f88-f21e-4b9b-ab68-79d68bd37a7f\") " pod="openshift-service-ca/service-ca-9c57cc56f-2ggnl" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.420616 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stw2d\" (UniqueName: \"kubernetes.io/projected/ff492068-85f5-4775-b13f-2604432a063e-kube-api-access-stw2d\") pod \"catalog-operator-68c6474976-d75v6\" (UID: \"ff492068-85f5-4775-b13f-2604432a063e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d75v6" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.420656 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/884fd52c-4588-455a-a5b7-b333dc17aa3c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-sk2nz\" (UID: \"884fd52c-4588-455a-a5b7-b333dc17aa3c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sk2nz" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.420691 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5d1e76c1-ba8e-41b1-a312-0670b06bc59f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6g4p8\" (UID: \"5d1e76c1-ba8e-41b1-a312-0670b06bc59f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6g4p8" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.420720 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bf5d3de7-2818-4aa8-9c56-81244d431713-etcd-client\") pod \"apiserver-76f77b778f-m8qq4\" (UID: \"bf5d3de7-2818-4aa8-9c56-81244d431713\") " pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.420752 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/73db7fe9-d1e0-44f1-88fa-a186dec7a4b0-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9t47f\" (UID: \"73db7fe9-d1e0-44f1-88fa-a186dec7a4b0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9t47f" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 
06:48:45.420779 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/192cd808-feeb-4944-a1b3-99109ea0928e-client-ca\") pod \"controller-manager-879f6c89f-mr9hz\" (UID: \"192cd808-feeb-4944-a1b3-99109ea0928e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mr9hz" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.420808 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/753527d9-7500-4526-be72-3306582c5f7d-serving-cert\") pod \"service-ca-operator-777779d784-nwtjc\" (UID: \"753527d9-7500-4526-be72-3306582c5f7d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nwtjc" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.420838 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/24ab0c5d-3e8a-4f6a-8060-882262c28888-auth-proxy-config\") pod \"machine-approver-56656f9798-nh9gk\" (UID: \"24ab0c5d-3e8a-4f6a-8060-882262c28888\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nh9gk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.420868 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr7ff\" (UniqueName: \"kubernetes.io/projected/f2e2d019-7fb7-4f75-81ee-b20a700c8f0b-kube-api-access-xr7ff\") pod \"downloads-7954f5f757-ltfqp\" (UID: \"f2e2d019-7fb7-4f75-81ee-b20a700c8f0b\") " pod="openshift-console/downloads-7954f5f757-ltfqp" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.420898 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb557\" (UniqueName: \"kubernetes.io/projected/73db7fe9-d1e0-44f1-88fa-a186dec7a4b0-kube-api-access-qb557\") pod \"machine-config-operator-74547568cd-9t47f\" (UID: \"73db7fe9-d1e0-44f1-88fa-a186dec7a4b0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9t47f" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.420924 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/34e19426-cb00-4e09-9933-57a015735c77-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bzdxk\" (UID: \"34e19426-cb00-4e09-9933-57a015735c77\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bzdxk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.420956 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9e475ebe-1a52-4423-bc08-5c659c92f7e8-etcd-service-ca\") pod \"etcd-operator-b45778765-6qfnr\" (UID: \"9e475ebe-1a52-4423-bc08-5c659c92f7e8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6qfnr" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.420986 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dbac4dd-abc3-4fe1-9b4c-2106cf81684e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-hs9jk\" (UID: \"7dbac4dd-abc3-4fe1-9b4c-2106cf81684e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hs9jk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.421013 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e05300a3-a8c8-495f-a4fd-79f326ce0d73-metrics-tls\") pod \"ingress-operator-5b745b69d9-lf8gr\" (UID: \"e05300a3-a8c8-495f-a4fd-79f326ce0d73\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lf8gr" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.421040 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3b10ebe2-f762-4ac3-a59a-76aa1256bdf7-audit-policies\") pod \"apiserver-7bbb656c7d-pxnl2\" (UID: \"3b10ebe2-f762-4ac3-a59a-76aa1256bdf7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pxnl2" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.421189 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3b10ebe2-f762-4ac3-a59a-76aa1256bdf7-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-pxnl2\" (UID: \"3b10ebe2-f762-4ac3-a59a-76aa1256bdf7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pxnl2" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.421204 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tqkz\" (UniqueName: \"kubernetes.io/projected/1b035abe-aba2-464c-bf93-43ca7da14869-kube-api-access-9tqkz\") pod \"machine-config-server-r49vt\" (UID: \"1b035abe-aba2-464c-bf93-43ca7da14869\") " pod="openshift-machine-config-operator/machine-config-server-r49vt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.421241 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x28kc\" (UniqueName: \"kubernetes.io/projected/5d1e76c1-ba8e-41b1-a312-0670b06bc59f-kube-api-access-x28kc\") pod \"multus-admission-controller-857f4d67dd-6g4p8\" (UID: \"5d1e76c1-ba8e-41b1-a312-0670b06bc59f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6g4p8" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.421268 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9gmn\" (UniqueName: \"kubernetes.io/projected/677d7062-8915-454b-aff0-999ba539d454-kube-api-access-m9gmn\") pod \"packageserver-d55dfcdfc-rzsch\" (UID: \"677d7062-8915-454b-aff0-999ba539d454\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rzsch" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.421296 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb7x2\" (UniqueName: \"kubernetes.io/projected/0b9441e9-83fe-42de-92a2-71efc3550ac1-kube-api-access-qb7x2\") pod \"olm-operator-6b444d44fb-ts729\" (UID: \"0b9441e9-83fe-42de-92a2-71efc3550ac1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ts729" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.421326 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ee6f1f88-f21e-4b9b-ab68-79d68bd37a7f-signing-key\") pod \"service-ca-9c57cc56f-2ggnl\" (UID: \"ee6f1f88-f21e-4b9b-ab68-79d68bd37a7f\") " pod="openshift-service-ca/service-ca-9c57cc56f-2ggnl" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.421351 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.421381 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23a02d37-8e7f-4855-a203-a3d9865cdd3b-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6gn8b\" (UID: \"23a02d37-8e7f-4855-a203-a3d9865cdd3b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6gn8b" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.421408 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/00061f00-b799-407e-8b71-30de57b92847-service-ca\") pod \"console-f9d7485db-r6xbk\" (UID: \"00061f00-b799-407e-8b71-30de57b92847\") " pod="openshift-console/console-f9d7485db-r6xbk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.421439 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3b10ebe2-f762-4ac3-a59a-76aa1256bdf7-audit-dir\") pod \"apiserver-7bbb656c7d-pxnl2\" (UID: \"3b10ebe2-f762-4ac3-a59a-76aa1256bdf7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pxnl2" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.421469 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bbe422dd-b194-4dd1-a363-c9cbe72971f9-metrics-tls\") pod \"dns-operator-744455d44c-t5kh8\" (UID: \"bbe422dd-b194-4dd1-a363-c9cbe72971f9\") " pod="openshift-dns-operator/dns-operator-744455d44c-t5kh8" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.421496 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lr4m\" (UniqueName: \"kubernetes.io/projected/7dbac4dd-abc3-4fe1-9b4c-2106cf81684e-kube-api-access-2lr4m\") pod \"kube-storage-version-migrator-operator-b67b599dd-hs9jk\" (UID: \"7dbac4dd-abc3-4fe1-9b4c-2106cf81684e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hs9jk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.421523 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/677d7062-8915-454b-aff0-999ba539d454-apiservice-cert\") pod \"packageserver-d55dfcdfc-rzsch\" (UID: \"677d7062-8915-454b-aff0-999ba539d454\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rzsch" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.421574 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/00061f00-b799-407e-8b71-30de57b92847-oauth-serving-cert\") pod \"console-f9d7485db-r6xbk\" (UID: \"00061f00-b799-407e-8b71-30de57b92847\") " pod="openshift-console/console-f9d7485db-r6xbk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.421602 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx7l8\" (UniqueName: \"kubernetes.io/projected/68d73a51-598c-41e8-9064-3942bd4f93df-kube-api-access-kx7l8\") pod \"machine-api-operator-5694c8668f-46nhj\" (UID: 
\"68d73a51-598c-41e8-9064-3942bd4f93df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-46nhj" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.421603 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f4dad69-2226-4d55-95af-7fe3b3cfaf41-config\") pod \"kube-controller-manager-operator-78b949d7b-xzjrl\" (UID: \"3f4dad69-2226-4d55-95af-7fe3b3cfaf41\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xzjrl" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.421632 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98eaa352-8307-40bd-b8a3-16f2e3088fa4-secret-volume\") pod \"collect-profiles-29491605-kd67n\" (UID: \"98eaa352-8307-40bd-b8a3-16f2e3088fa4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-kd67n" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.421719 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1b035abe-aba2-464c-bf93-43ca7da14869-certs\") pod \"machine-config-server-r49vt\" (UID: \"1b035abe-aba2-464c-bf93-43ca7da14869\") " pod="openshift-machine-config-operator/machine-config-server-r49vt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.421762 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/192cd808-feeb-4944-a1b3-99109ea0928e-config\") pod \"controller-manager-879f6c89f-mr9hz\" (UID: \"192cd808-feeb-4944-a1b3-99109ea0928e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mr9hz" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.421816 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ftsm\" (UniqueName: \"kubernetes.io/projected/a982bd3f-5096-4ddc-9a62-9cec039757e1-kube-api-access-7ftsm\") pod \"cluster-image-registry-operator-dc59b4c8b-4tgsj\" (UID: \"a982bd3f-5096-4ddc-9a62-9cec039757e1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tgsj" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.421858 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccqc7\" (UniqueName: \"kubernetes.io/projected/a08326b5-7ad1-43ef-987c-86b61eeade10-kube-api-access-ccqc7\") pod \"openshift-apiserver-operator-796bbdcf4f-7hj88\" (UID: \"a08326b5-7ad1-43ef-987c-86b61eeade10\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7hj88" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.421896 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00061f00-b799-407e-8b71-30de57b92847-trusted-ca-bundle\") pod \"console-f9d7485db-r6xbk\" (UID: \"00061f00-b799-407e-8b71-30de57b92847\") " pod="openshift-console/console-f9d7485db-r6xbk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.421938 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55c59494-3868-4467-a864-52b1b3385b5e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wrzb2\" (UID: \"55c59494-3868-4467-a864-52b1b3385b5e\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrzb2" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.422031 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1bb7a5d9-b5b4-41b6-ac58-a33656c85de0-client-ca\") pod \"route-controller-manager-6576b87f9c-428lj\" (UID: \"1bb7a5d9-b5b4-41b6-ac58-a33656c85de0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-428lj" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.422074 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ff492068-85f5-4775-b13f-2604432a063e-profile-collector-cert\") pod \"catalog-operator-68c6474976-d75v6\" (UID: \"ff492068-85f5-4775-b13f-2604432a063e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d75v6" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.422110 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68d73a51-598c-41e8-9064-3942bd4f93df-config\") pod \"machine-api-operator-5694c8668f-46nhj\" (UID: \"68d73a51-598c-41e8-9064-3942bd4f93df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-46nhj" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.422152 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbs75\" (UniqueName: \"kubernetes.io/projected/bf5d3de7-2818-4aa8-9c56-81244d431713-kube-api-access-vbs75\") pod \"apiserver-76f77b778f-m8qq4\" (UID: \"bf5d3de7-2818-4aa8-9c56-81244d431713\") " pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.422179 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a982bd3f-5096-4ddc-9a62-9cec039757e1-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4tgsj\" (UID: \"a982bd3f-5096-4ddc-9a62-9cec039757e1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tgsj" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.422189 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bf5d3de7-2818-4aa8-9c56-81244d431713-node-pullsecrets\") pod \"apiserver-76f77b778f-m8qq4\" (UID: \"bf5d3de7-2818-4aa8-9c56-81244d431713\") " pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.422227 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9674\" (UniqueName: \"kubernetes.io/projected/3b10ebe2-f762-4ac3-a59a-76aa1256bdf7-kube-api-access-h9674\") pod \"apiserver-7bbb656c7d-pxnl2\" (UID: \"3b10ebe2-f762-4ac3-a59a-76aa1256bdf7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pxnl2" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.422241 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/24ab0c5d-3e8a-4f6a-8060-882262c28888-auth-proxy-config\") pod \"machine-approver-56656f9798-nh9gk\" (UID: \"24ab0c5d-3e8a-4f6a-8060-882262c28888\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nh9gk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.422249 
4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3b10ebe2-f762-4ac3-a59a-76aa1256bdf7-audit-policies\") pod \"apiserver-7bbb656c7d-pxnl2\" (UID: \"3b10ebe2-f762-4ac3-a59a-76aa1256bdf7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pxnl2" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.422263 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5bms\" (UniqueName: \"kubernetes.io/projected/192cd808-feeb-4944-a1b3-99109ea0928e-kube-api-access-v5bms\") pod \"controller-manager-879f6c89f-mr9hz\" (UID: \"192cd808-feeb-4944-a1b3-99109ea0928e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mr9hz" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.422314 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.422357 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b10ebe2-f762-4ac3-a59a-76aa1256bdf7-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-pxnl2\" (UID: \"3b10ebe2-f762-4ac3-a59a-76aa1256bdf7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pxnl2" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.422398 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55c59494-3868-4467-a864-52b1b3385b5e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wrzb2\" (UID: \"55c59494-3868-4467-a864-52b1b3385b5e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrzb2" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.422475 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ee6f1f88-f21e-4b9b-ab68-79d68bd37a7f-signing-cabundle\") pod \"service-ca-9c57cc56f-2ggnl\" (UID: \"ee6f1f88-f21e-4b9b-ab68-79d68bd37a7f\") " pod="openshift-service-ca/service-ca-9c57cc56f-2ggnl" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.422512 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23a02d37-8e7f-4855-a203-a3d9865cdd3b-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6gn8b\" (UID: \"23a02d37-8e7f-4855-a203-a3d9865cdd3b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6gn8b" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.422577 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0b9441e9-83fe-42de-92a2-71efc3550ac1-srv-cert\") pod \"olm-operator-6b444d44fb-ts729\" (UID: \"0b9441e9-83fe-42de-92a2-71efc3550ac1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ts729" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.422618 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" 
(UniqueName: \"kubernetes.io/secret/24ab0c5d-3e8a-4f6a-8060-882262c28888-machine-approver-tls\") pod \"machine-approver-56656f9798-nh9gk\" (UID: \"24ab0c5d-3e8a-4f6a-8060-882262c28888\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nh9gk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.422658 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98eaa352-8307-40bd-b8a3-16f2e3088fa4-config-volume\") pod \"collect-profiles-29491605-kd67n\" (UID: \"98eaa352-8307-40bd-b8a3-16f2e3088fa4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-kd67n" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.422675 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dbac4dd-abc3-4fe1-9b4c-2106cf81684e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-hs9jk\" (UID: \"7dbac4dd-abc3-4fe1-9b4c-2106cf81684e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hs9jk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.422866 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9e475ebe-1a52-4423-bc08-5c659c92f7e8-etcd-ca\") pod \"etcd-operator-b45778765-6qfnr\" (UID: \"9e475ebe-1a52-4423-bc08-5c659c92f7e8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6qfnr" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.422890 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bb7a5d9-b5b4-41b6-ac58-a33656c85de0-config\") pod \"route-controller-manager-6576b87f9c-428lj\" (UID: \"1bb7a5d9-b5b4-41b6-ac58-a33656c85de0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-428lj" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.422892 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e271bec2-e43e-4236-a3d5-024b55665af9-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-859tc\" (UID: \"e271bec2-e43e-4236-a3d5-024b55665af9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-859tc" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.422940 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.422951 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9e475ebe-1a52-4423-bc08-5c659c92f7e8-etcd-service-ca\") pod \"etcd-operator-b45778765-6qfnr\" (UID: \"9e475ebe-1a52-4423-bc08-5c659c92f7e8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6qfnr" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.422962 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d2b48287-9900-438d-b356-0859859a90e8-trusted-ca\") pod 
\"console-operator-58897d9998-4ddms\" (UID: \"d2b48287-9900-438d-b356-0859859a90e8\") " pod="openshift-console-operator/console-operator-58897d9998-4ddms" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.422992 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cx9d\" (UniqueName: \"kubernetes.io/projected/4bb970b4-43c2-46e2-b707-20145a03a2bb-kube-api-access-6cx9d\") pod \"control-plane-machine-set-operator-78cbb6b69f-pd9lg\" (UID: \"4bb970b4-43c2-46e2-b707-20145a03a2bb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pd9lg" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.423012 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1b035abe-aba2-464c-bf93-43ca7da14869-node-bootstrap-token\") pod \"machine-config-server-r49vt\" (UID: \"1b035abe-aba2-464c-bf93-43ca7da14869\") " pod="openshift-machine-config-operator/machine-config-server-r49vt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.423036 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e271bec2-e43e-4236-a3d5-024b55665af9-service-ca-bundle\") pod \"authentication-operator-69f744f599-859tc\" (UID: \"e271bec2-e43e-4236-a3d5-024b55665af9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-859tc" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.423054 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b10ebe2-f762-4ac3-a59a-76aa1256bdf7-serving-cert\") pod \"apiserver-7bbb656c7d-pxnl2\" (UID: \"3b10ebe2-f762-4ac3-a59a-76aa1256bdf7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pxnl2" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.423073 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/00061f00-b799-407e-8b71-30de57b92847-console-oauth-config\") pod \"console-f9d7485db-r6xbk\" (UID: \"00061f00-b799-407e-8b71-30de57b92847\") " pod="openshift-console/console-f9d7485db-r6xbk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.423093 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhqcq\" (UniqueName: \"kubernetes.io/projected/884fd52c-4588-455a-a5b7-b333dc17aa3c-kube-api-access-vhqcq\") pod \"package-server-manager-789f6589d5-sk2nz\" (UID: \"884fd52c-4588-455a-a5b7-b333dc17aa3c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sk2nz" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.423110 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/56d7f37b-05cc-4a36-b844-423465e79e8e-audit-dir\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.423127 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: 
\"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.423900 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1bb7a5d9-b5b4-41b6-ac58-a33656c85de0-client-ca\") pod \"route-controller-manager-6576b87f9c-428lj\" (UID: \"1bb7a5d9-b5b4-41b6-ac58-a33656c85de0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-428lj" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.424221 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e475ebe-1a52-4423-bc08-5c659c92f7e8-config\") pod \"etcd-operator-b45778765-6qfnr\" (UID: \"9e475ebe-1a52-4423-bc08-5c659c92f7e8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6qfnr" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.424567 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48206a38-ab99-4560-9cf1-6a260f52c37d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-tnjwp\" (UID: \"48206a38-ab99-4560-9cf1-6a260f52c37d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnjwp" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.424806 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dbac4dd-abc3-4fe1-9b4c-2106cf81684e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-hs9jk\" (UID: \"7dbac4dd-abc3-4fe1-9b4c-2106cf81684e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hs9jk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.424853 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3b10ebe2-f762-4ac3-a59a-76aa1256bdf7-etcd-client\") pod \"apiserver-7bbb656c7d-pxnl2\" (UID: \"3b10ebe2-f762-4ac3-a59a-76aa1256bdf7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pxnl2" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.425289 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9e475ebe-1a52-4423-bc08-5c659c92f7e8-etcd-ca\") pod \"etcd-operator-b45778765-6qfnr\" (UID: \"9e475ebe-1a52-4423-bc08-5c659c92f7e8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6qfnr" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.426047 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a982bd3f-5096-4ddc-9a62-9cec039757e1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4tgsj\" (UID: \"a982bd3f-5096-4ddc-9a62-9cec039757e1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tgsj" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.426251 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3b10ebe2-f762-4ac3-a59a-76aa1256bdf7-audit-dir\") pod \"apiserver-7bbb656c7d-pxnl2\" (UID: \"3b10ebe2-f762-4ac3-a59a-76aa1256bdf7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pxnl2" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.427115 4796 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a82fe90d-0a61-4da5-bfac-7c69f6a59339-serving-cert\") pod \"openshift-config-operator-7777fb866f-zprl5\" (UID: \"a82fe90d-0a61-4da5-bfac-7c69f6a59339\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zprl5" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.427437 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a08326b5-7ad1-43ef-987c-86b61eeade10-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-7hj88\" (UID: \"a08326b5-7ad1-43ef-987c-86b61eeade10\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7hj88" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.427610 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/677d7062-8915-454b-aff0-999ba539d454-tmpfs\") pod \"packageserver-d55dfcdfc-rzsch\" (UID: \"677d7062-8915-454b-aff0-999ba539d454\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rzsch" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.428906 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b10ebe2-f762-4ac3-a59a-76aa1256bdf7-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-pxnl2\" (UID: \"3b10ebe2-f762-4ac3-a59a-76aa1256bdf7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pxnl2" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.429653 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-q644h"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.430646 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23a02d37-8e7f-4855-a203-a3d9865cdd3b-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-6gn8b\" (UID: \"23a02d37-8e7f-4855-a203-a3d9865cdd3b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6gn8b" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.430817 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-q644h" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.431021 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0b9441e9-83fe-42de-92a2-71efc3550ac1-profile-collector-cert\") pod \"olm-operator-6b444d44fb-ts729\" (UID: \"0b9441e9-83fe-42de-92a2-71efc3550ac1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ts729" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.431154 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ff492068-85f5-4775-b13f-2604432a063e-profile-collector-cert\") pod \"catalog-operator-68c6474976-d75v6\" (UID: \"ff492068-85f5-4775-b13f-2604432a063e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d75v6" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.431427 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5d1e76c1-ba8e-41b1-a312-0670b06bc59f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6g4p8\" (UID: \"5d1e76c1-ba8e-41b1-a312-0670b06bc59f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6g4p8" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.431801 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ff492068-85f5-4775-b13f-2604432a063e-srv-cert\") pod \"catalog-operator-68c6474976-d75v6\" (UID: \"ff492068-85f5-4775-b13f-2604432a063e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d75v6" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.432561 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9e475ebe-1a52-4423-bc08-5c659c92f7e8-serving-cert\") pod \"etcd-operator-b45778765-6qfnr\" (UID: \"9e475ebe-1a52-4423-bc08-5c659c92f7e8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6qfnr" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.432570 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/24ab0c5d-3e8a-4f6a-8060-882262c28888-machine-approver-tls\") pod \"machine-approver-56656f9798-nh9gk\" (UID: \"24ab0c5d-3e8a-4f6a-8060-882262c28888\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nh9gk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.433074 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/677d7062-8915-454b-aff0-999ba539d454-apiservice-cert\") pod \"packageserver-d55dfcdfc-rzsch\" (UID: \"677d7062-8915-454b-aff0-999ba539d454\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rzsch" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.433927 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f4dad69-2226-4d55-95af-7fe3b3cfaf41-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xzjrl\" (UID: \"3f4dad69-2226-4d55-95af-7fe3b3cfaf41\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xzjrl" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.434027 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/a08326b5-7ad1-43ef-987c-86b61eeade10-config\") pod \"openshift-apiserver-operator-796bbdcf4f-7hj88\" (UID: \"a08326b5-7ad1-43ef-987c-86b61eeade10\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7hj88" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.434282 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0b9441e9-83fe-42de-92a2-71efc3550ac1-srv-cert\") pod \"olm-operator-6b444d44fb-ts729\" (UID: \"0b9441e9-83fe-42de-92a2-71efc3550ac1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ts729" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.436490 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c009d452-642e-47de-994c-cc6e0af791f9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qhgrn\" (UID: \"c009d452-642e-47de-994c-cc6e0af791f9\") " pod="openshift-marketplace/marketplace-operator-79b997595-qhgrn" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.436615 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.436624 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491605-kd67n"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.438117 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-69rqv"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.439954 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4bb970b4-43c2-46e2-b707-20145a03a2bb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-pd9lg\" (UID: \"4bb970b4-43c2-46e2-b707-20145a03a2bb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pd9lg" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.442747 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-46nhj"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.445090 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9e475ebe-1a52-4423-bc08-5c659c92f7e8-etcd-client\") pod \"etcd-operator-b45778765-6qfnr\" (UID: \"9e475ebe-1a52-4423-bc08-5c659c92f7e8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6qfnr" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.445705 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/677d7062-8915-454b-aff0-999ba539d454-webhook-cert\") pod \"packageserver-d55dfcdfc-rzsch\" (UID: \"677d7062-8915-454b-aff0-999ba539d454\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rzsch" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.447205 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.455017 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bb7a5d9-b5b4-41b6-ac58-a33656c85de0-serving-cert\") pod 
\"route-controller-manager-6576b87f9c-428lj\" (UID: \"1bb7a5d9-b5b4-41b6-ac58-a33656c85de0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-428lj" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.458428 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-2ggnl"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.461488 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-q644h"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.461520 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-bxmhd"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.462518 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-5fxsd"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.463197 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.463796 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5fxsd"] Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.463917 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5fxsd" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.481843 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.501608 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.523117 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.524772 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/28c00736-23cd-46f9-b656-94a11f71a470-proxy-tls\") pod \"machine-config-controller-84d6567774-pkjfq\" (UID: \"28c00736-23cd-46f9-b656-94a11f71a470\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pkjfq" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.524805 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/68d73a51-598c-41e8-9064-3942bd4f93df-images\") pod \"machine-api-operator-5694c8668f-46nhj\" (UID: \"68d73a51-598c-41e8-9064-3942bd4f93df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-46nhj" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.524829 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.524850 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.524878 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/73db7fe9-d1e0-44f1-88fa-a186dec7a4b0-images\") pod \"machine-config-operator-74547568cd-9t47f\" (UID: \"73db7fe9-d1e0-44f1-88fa-a186dec7a4b0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9t47f" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.524913 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/bf5d3de7-2818-4aa8-9c56-81244d431713-audit\") pod \"apiserver-76f77b778f-m8qq4\" (UID: \"bf5d3de7-2818-4aa8-9c56-81244d431713\") " pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.524935 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/00061f00-b799-407e-8b71-30de57b92847-console-serving-cert\") pod \"console-f9d7485db-r6xbk\" (UID: \"00061f00-b799-407e-8b71-30de57b92847\") " pod="openshift-console/console-f9d7485db-r6xbk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.524955 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.524978 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghgz4\" (UniqueName: \"kubernetes.io/projected/56d7f37b-05cc-4a36-b844-423465e79e8e-kube-api-access-ghgz4\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525006 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc6lv\" (UniqueName: \"kubernetes.io/projected/34e19426-cb00-4e09-9933-57a015735c77-kube-api-access-zc6lv\") pod \"cluster-samples-operator-665b6dd947-bzdxk\" (UID: \"34e19426-cb00-4e09-9933-57a015735c77\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bzdxk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525026 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/bf5d3de7-2818-4aa8-9c56-81244d431713-image-import-ca\") pod \"apiserver-76f77b778f-m8qq4\" (UID: \"bf5d3de7-2818-4aa8-9c56-81244d431713\") " pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525046 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bf5d3de7-2818-4aa8-9c56-81244d431713-encryption-config\") pod \"apiserver-76f77b778f-m8qq4\" (UID: \"bf5d3de7-2818-4aa8-9c56-81244d431713\") " 
pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525079 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525103 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525122 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bf5d3de7-2818-4aa8-9c56-81244d431713-audit-dir\") pod \"apiserver-76f77b778f-m8qq4\" (UID: \"bf5d3de7-2818-4aa8-9c56-81244d431713\") " pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525149 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bz49\" (UniqueName: \"kubernetes.io/projected/98eaa352-8307-40bd-b8a3-16f2e3088fa4-kube-api-access-8bz49\") pod \"collect-profiles-29491605-kd67n\" (UID: \"98eaa352-8307-40bd-b8a3-16f2e3088fa4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-kd67n" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525168 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5swjv\" (UniqueName: \"kubernetes.io/projected/e05300a3-a8c8-495f-a4fd-79f326ce0d73-kube-api-access-5swjv\") pod \"ingress-operator-5b745b69d9-lf8gr\" (UID: \"e05300a3-a8c8-495f-a4fd-79f326ce0d73\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lf8gr" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525193 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525210 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr89j\" (UniqueName: \"kubernetes.io/projected/d2b48287-9900-438d-b356-0859859a90e8-kube-api-access-hr89j\") pod \"console-operator-58897d9998-4ddms\" (UID: \"d2b48287-9900-438d-b356-0859859a90e8\") " pod="openshift-console-operator/console-operator-58897d9998-4ddms" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525229 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kxts\" (UniqueName: \"kubernetes.io/projected/753527d9-7500-4526-be72-3306582c5f7d-kube-api-access-4kxts\") pod \"service-ca-operator-777779d784-nwtjc\" (UID: \"753527d9-7500-4526-be72-3306582c5f7d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nwtjc" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 
06:48:45.525250 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/192cd808-feeb-4944-a1b3-99109ea0928e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mr9hz\" (UID: \"192cd808-feeb-4944-a1b3-99109ea0928e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mr9hz" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525280 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px6rg\" (UniqueName: \"kubernetes.io/projected/00061f00-b799-407e-8b71-30de57b92847-kube-api-access-px6rg\") pod \"console-f9d7485db-r6xbk\" (UID: \"00061f00-b799-407e-8b71-30de57b92847\") " pod="openshift-console/console-f9d7485db-r6xbk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525315 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2b48287-9900-438d-b356-0859859a90e8-config\") pod \"console-operator-58897d9998-4ddms\" (UID: \"d2b48287-9900-438d-b356-0859859a90e8\") " pod="openshift-console-operator/console-operator-58897d9998-4ddms" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525358 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/884fd52c-4588-455a-a5b7-b333dc17aa3c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-sk2nz\" (UID: \"884fd52c-4588-455a-a5b7-b333dc17aa3c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sk2nz" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525376 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nthlh\" (UniqueName: \"kubernetes.io/projected/ee6f1f88-f21e-4b9b-ab68-79d68bd37a7f-kube-api-access-nthlh\") pod \"service-ca-9c57cc56f-2ggnl\" (UID: \"ee6f1f88-f21e-4b9b-ab68-79d68bd37a7f\") " pod="openshift-service-ca/service-ca-9c57cc56f-2ggnl" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525393 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bf5d3de7-2818-4aa8-9c56-81244d431713-etcd-client\") pod \"apiserver-76f77b778f-m8qq4\" (UID: \"bf5d3de7-2818-4aa8-9c56-81244d431713\") " pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525408 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/73db7fe9-d1e0-44f1-88fa-a186dec7a4b0-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9t47f\" (UID: \"73db7fe9-d1e0-44f1-88fa-a186dec7a4b0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9t47f" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525424 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/192cd808-feeb-4944-a1b3-99109ea0928e-client-ca\") pod \"controller-manager-879f6c89f-mr9hz\" (UID: \"192cd808-feeb-4944-a1b3-99109ea0928e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mr9hz" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525442 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/753527d9-7500-4526-be72-3306582c5f7d-serving-cert\") 
pod \"service-ca-operator-777779d784-nwtjc\" (UID: \"753527d9-7500-4526-be72-3306582c5f7d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nwtjc" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525460 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb557\" (UniqueName: \"kubernetes.io/projected/73db7fe9-d1e0-44f1-88fa-a186dec7a4b0-kube-api-access-qb557\") pod \"machine-config-operator-74547568cd-9t47f\" (UID: \"73db7fe9-d1e0-44f1-88fa-a186dec7a4b0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9t47f" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525476 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/34e19426-cb00-4e09-9933-57a015735c77-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bzdxk\" (UID: \"34e19426-cb00-4e09-9933-57a015735c77\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bzdxk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525502 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e05300a3-a8c8-495f-a4fd-79f326ce0d73-metrics-tls\") pod \"ingress-operator-5b745b69d9-lf8gr\" (UID: \"e05300a3-a8c8-495f-a4fd-79f326ce0d73\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lf8gr" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525523 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tqkz\" (UniqueName: \"kubernetes.io/projected/1b035abe-aba2-464c-bf93-43ca7da14869-kube-api-access-9tqkz\") pod \"machine-config-server-r49vt\" (UID: \"1b035abe-aba2-464c-bf93-43ca7da14869\") " pod="openshift-machine-config-operator/machine-config-server-r49vt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525580 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ee6f1f88-f21e-4b9b-ab68-79d68bd37a7f-signing-key\") pod \"service-ca-9c57cc56f-2ggnl\" (UID: \"ee6f1f88-f21e-4b9b-ab68-79d68bd37a7f\") " pod="openshift-service-ca/service-ca-9c57cc56f-2ggnl" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525598 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525625 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/00061f00-b799-407e-8b71-30de57b92847-service-ca\") pod \"console-f9d7485db-r6xbk\" (UID: \"00061f00-b799-407e-8b71-30de57b92847\") " pod="openshift-console/console-f9d7485db-r6xbk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525657 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/00061f00-b799-407e-8b71-30de57b92847-oauth-serving-cert\") pod \"console-f9d7485db-r6xbk\" (UID: \"00061f00-b799-407e-8b71-30de57b92847\") " pod="openshift-console/console-f9d7485db-r6xbk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525674 
4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx7l8\" (UniqueName: \"kubernetes.io/projected/68d73a51-598c-41e8-9064-3942bd4f93df-kube-api-access-kx7l8\") pod \"machine-api-operator-5694c8668f-46nhj\" (UID: \"68d73a51-598c-41e8-9064-3942bd4f93df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-46nhj" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525690 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98eaa352-8307-40bd-b8a3-16f2e3088fa4-secret-volume\") pod \"collect-profiles-29491605-kd67n\" (UID: \"98eaa352-8307-40bd-b8a3-16f2e3088fa4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-kd67n" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525696 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/73db7fe9-d1e0-44f1-88fa-a186dec7a4b0-images\") pod \"machine-config-operator-74547568cd-9t47f\" (UID: \"73db7fe9-d1e0-44f1-88fa-a186dec7a4b0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9t47f" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525708 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/192cd808-feeb-4944-a1b3-99109ea0928e-config\") pod \"controller-manager-879f6c89f-mr9hz\" (UID: \"192cd808-feeb-4944-a1b3-99109ea0928e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mr9hz" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525739 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1b035abe-aba2-464c-bf93-43ca7da14869-certs\") pod \"machine-config-server-r49vt\" (UID: \"1b035abe-aba2-464c-bf93-43ca7da14869\") " pod="openshift-machine-config-operator/machine-config-server-r49vt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525756 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00061f00-b799-407e-8b71-30de57b92847-trusted-ca-bundle\") pod \"console-f9d7485db-r6xbk\" (UID: \"00061f00-b799-407e-8b71-30de57b92847\") " pod="openshift-console/console-f9d7485db-r6xbk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525793 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55c59494-3868-4467-a864-52b1b3385b5e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wrzb2\" (UID: \"55c59494-3868-4467-a864-52b1b3385b5e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrzb2" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525812 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68d73a51-598c-41e8-9064-3942bd4f93df-config\") pod \"machine-api-operator-5694c8668f-46nhj\" (UID: \"68d73a51-598c-41e8-9064-3942bd4f93df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-46nhj" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525830 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbs75\" (UniqueName: \"kubernetes.io/projected/bf5d3de7-2818-4aa8-9c56-81244d431713-kube-api-access-vbs75\") pod \"apiserver-76f77b778f-m8qq4\" (UID: 
\"bf5d3de7-2818-4aa8-9c56-81244d431713\") " pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525855 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5bms\" (UniqueName: \"kubernetes.io/projected/192cd808-feeb-4944-a1b3-99109ea0928e-kube-api-access-v5bms\") pod \"controller-manager-879f6c89f-mr9hz\" (UID: \"192cd808-feeb-4944-a1b3-99109ea0928e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mr9hz" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525874 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525891 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bf5d3de7-2818-4aa8-9c56-81244d431713-node-pullsecrets\") pod \"apiserver-76f77b778f-m8qq4\" (UID: \"bf5d3de7-2818-4aa8-9c56-81244d431713\") " pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525910 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55c59494-3868-4467-a864-52b1b3385b5e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wrzb2\" (UID: \"55c59494-3868-4467-a864-52b1b3385b5e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrzb2" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525943 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ee6f1f88-f21e-4b9b-ab68-79d68bd37a7f-signing-cabundle\") pod \"service-ca-9c57cc56f-2ggnl\" (UID: \"ee6f1f88-f21e-4b9b-ab68-79d68bd37a7f\") " pod="openshift-service-ca/service-ca-9c57cc56f-2ggnl" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525972 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98eaa352-8307-40bd-b8a3-16f2e3088fa4-config-volume\") pod \"collect-profiles-29491605-kd67n\" (UID: \"98eaa352-8307-40bd-b8a3-16f2e3088fa4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-kd67n" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.525996 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.526011 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d2b48287-9900-438d-b356-0859859a90e8-trusted-ca\") pod \"console-operator-58897d9998-4ddms\" (UID: \"d2b48287-9900-438d-b356-0859859a90e8\") " pod="openshift-console-operator/console-operator-58897d9998-4ddms" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 
06:48:45.526036 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1b035abe-aba2-464c-bf93-43ca7da14869-node-bootstrap-token\") pod \"machine-config-server-r49vt\" (UID: \"1b035abe-aba2-464c-bf93-43ca7da14869\") " pod="openshift-machine-config-operator/machine-config-server-r49vt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.526064 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/00061f00-b799-407e-8b71-30de57b92847-console-oauth-config\") pod \"console-f9d7485db-r6xbk\" (UID: \"00061f00-b799-407e-8b71-30de57b92847\") " pod="openshift-console/console-f9d7485db-r6xbk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.526082 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhqcq\" (UniqueName: \"kubernetes.io/projected/884fd52c-4588-455a-a5b7-b333dc17aa3c-kube-api-access-vhqcq\") pod \"package-server-manager-789f6589d5-sk2nz\" (UID: \"884fd52c-4588-455a-a5b7-b333dc17aa3c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sk2nz" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.526100 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/56d7f37b-05cc-4a36-b844-423465e79e8e-audit-dir\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.526117 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.526140 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/28c00736-23cd-46f9-b656-94a11f71a470-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-pkjfq\" (UID: \"28c00736-23cd-46f9-b656-94a11f71a470\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pkjfq" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.526159 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e05300a3-a8c8-495f-a4fd-79f326ce0d73-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lf8gr\" (UID: \"e05300a3-a8c8-495f-a4fd-79f326ce0d73\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lf8gr" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.526217 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/68d73a51-598c-41e8-9064-3942bd4f93df-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-46nhj\" (UID: \"68d73a51-598c-41e8-9064-3942bd4f93df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-46nhj" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.526234 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/bf5d3de7-2818-4aa8-9c56-81244d431713-etcd-serving-ca\") pod \"apiserver-76f77b778f-m8qq4\" (UID: \"bf5d3de7-2818-4aa8-9c56-81244d431713\") " pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.526250 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2b48287-9900-438d-b356-0859859a90e8-serving-cert\") pod \"console-operator-58897d9998-4ddms\" (UID: \"d2b48287-9900-438d-b356-0859859a90e8\") " pod="openshift-console-operator/console-operator-58897d9998-4ddms" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.526278 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e05300a3-a8c8-495f-a4fd-79f326ce0d73-trusted-ca\") pod \"ingress-operator-5b745b69d9-lf8gr\" (UID: \"e05300a3-a8c8-495f-a4fd-79f326ce0d73\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lf8gr" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.526297 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf5d3de7-2818-4aa8-9c56-81244d431713-trusted-ca-bundle\") pod \"apiserver-76f77b778f-m8qq4\" (UID: \"bf5d3de7-2818-4aa8-9c56-81244d431713\") " pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.526316 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b85sb\" (UniqueName: \"kubernetes.io/projected/28c00736-23cd-46f9-b656-94a11f71a470-kube-api-access-b85sb\") pod \"machine-config-controller-84d6567774-pkjfq\" (UID: \"28c00736-23cd-46f9-b656-94a11f71a470\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pkjfq" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.526342 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/73db7fe9-d1e0-44f1-88fa-a186dec7a4b0-proxy-tls\") pod \"machine-config-operator-74547568cd-9t47f\" (UID: \"73db7fe9-d1e0-44f1-88fa-a186dec7a4b0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9t47f" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.526361 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/192cd808-feeb-4944-a1b3-99109ea0928e-serving-cert\") pod \"controller-manager-879f6c89f-mr9hz\" (UID: \"192cd808-feeb-4944-a1b3-99109ea0928e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mr9hz" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.526395 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/56d7f37b-05cc-4a36-b844-423465e79e8e-audit-policies\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.526412 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/753527d9-7500-4526-be72-3306582c5f7d-config\") pod \"service-ca-operator-777779d784-nwtjc\" (UID: \"753527d9-7500-4526-be72-3306582c5f7d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nwtjc" Jan 
27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.526430 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.526447 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf5d3de7-2818-4aa8-9c56-81244d431713-serving-cert\") pod \"apiserver-76f77b778f-m8qq4\" (UID: \"bf5d3de7-2818-4aa8-9c56-81244d431713\") " pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.526463 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/00061f00-b799-407e-8b71-30de57b92847-console-config\") pod \"console-f9d7485db-r6xbk\" (UID: \"00061f00-b799-407e-8b71-30de57b92847\") " pod="openshift-console/console-f9d7485db-r6xbk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.526479 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/55c59494-3868-4467-a864-52b1b3385b5e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wrzb2\" (UID: \"55c59494-3868-4467-a864-52b1b3385b5e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrzb2" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.526497 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf5d3de7-2818-4aa8-9c56-81244d431713-config\") pod \"apiserver-76f77b778f-m8qq4\" (UID: \"bf5d3de7-2818-4aa8-9c56-81244d431713\") " pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.527086 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bf5d3de7-2818-4aa8-9c56-81244d431713-node-pullsecrets\") pod \"apiserver-76f77b778f-m8qq4\" (UID: \"bf5d3de7-2818-4aa8-9c56-81244d431713\") " pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.527113 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bf5d3de7-2818-4aa8-9c56-81244d431713-audit-dir\") pod \"apiserver-76f77b778f-m8qq4\" (UID: \"bf5d3de7-2818-4aa8-9c56-81244d431713\") " pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.528217 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/56d7f37b-05cc-4a36-b844-423465e79e8e-audit-dir\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.528600 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/73db7fe9-d1e0-44f1-88fa-a186dec7a4b0-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9t47f\" (UID: 
\"73db7fe9-d1e0-44f1-88fa-a186dec7a4b0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9t47f" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.530336 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e05300a3-a8c8-495f-a4fd-79f326ce0d73-trusted-ca\") pod \"ingress-operator-5b745b69d9-lf8gr\" (UID: \"e05300a3-a8c8-495f-a4fd-79f326ce0d73\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lf8gr" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.530698 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/28c00736-23cd-46f9-b656-94a11f71a470-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-pkjfq\" (UID: \"28c00736-23cd-46f9-b656-94a11f71a470\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pkjfq" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.530763 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/28c00736-23cd-46f9-b656-94a11f71a470-proxy-tls\") pod \"machine-config-controller-84d6567774-pkjfq\" (UID: \"28c00736-23cd-46f9-b656-94a11f71a470\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pkjfq" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.531575 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e05300a3-a8c8-495f-a4fd-79f326ce0d73-metrics-tls\") pod \"ingress-operator-5b745b69d9-lf8gr\" (UID: \"e05300a3-a8c8-495f-a4fd-79f326ce0d73\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lf8gr" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.531622 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/884fd52c-4588-455a-a5b7-b333dc17aa3c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-sk2nz\" (UID: \"884fd52c-4588-455a-a5b7-b333dc17aa3c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sk2nz" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.532453 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98eaa352-8307-40bd-b8a3-16f2e3088fa4-secret-volume\") pod \"collect-profiles-29491605-kd67n\" (UID: \"98eaa352-8307-40bd-b8a3-16f2e3088fa4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-kd67n" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.534327 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/34e19426-cb00-4e09-9933-57a015735c77-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bzdxk\" (UID: \"34e19426-cb00-4e09-9933-57a015735c77\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bzdxk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.539657 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3b10ebe2-f762-4ac3-a59a-76aa1256bdf7-encryption-config\") pod \"apiserver-7bbb656c7d-pxnl2\" (UID: \"3b10ebe2-f762-4ac3-a59a-76aa1256bdf7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pxnl2" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 
06:48:45.539922 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e271bec2-e43e-4236-a3d5-024b55665af9-config\") pod \"authentication-operator-69f744f599-859tc\" (UID: \"e271bec2-e43e-4236-a3d5-024b55665af9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-859tc" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.539979 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b10ebe2-f762-4ac3-a59a-76aa1256bdf7-serving-cert\") pod \"apiserver-7bbb656c7d-pxnl2\" (UID: \"3b10ebe2-f762-4ac3-a59a-76aa1256bdf7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pxnl2" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.540301 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bbe422dd-b194-4dd1-a363-c9cbe72971f9-metrics-tls\") pod \"dns-operator-744455d44c-t5kh8\" (UID: \"bbe422dd-b194-4dd1-a363-c9cbe72971f9\") " pod="openshift-dns-operator/dns-operator-744455d44c-t5kh8" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.540656 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e271bec2-e43e-4236-a3d5-024b55665af9-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-859tc\" (UID: \"e271bec2-e43e-4236-a3d5-024b55665af9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-859tc" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.542855 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.546129 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e271bec2-e43e-4236-a3d5-024b55665af9-serving-cert\") pod \"authentication-operator-69f744f599-859tc\" (UID: \"e271bec2-e43e-4236-a3d5-024b55665af9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-859tc" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.546304 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e271bec2-e43e-4236-a3d5-024b55665af9-service-ca-bundle\") pod \"authentication-operator-69f744f599-859tc\" (UID: \"e271bec2-e43e-4236-a3d5-024b55665af9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-859tc" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.546492 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/73db7fe9-d1e0-44f1-88fa-a186dec7a4b0-proxy-tls\") pod \"machine-config-operator-74547568cd-9t47f\" (UID: \"73db7fe9-d1e0-44f1-88fa-a186dec7a4b0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9t47f" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.562958 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.571102 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/192cd808-feeb-4944-a1b3-99109ea0928e-serving-cert\") pod \"controller-manager-879f6c89f-mr9hz\" (UID: 
\"192cd808-feeb-4944-a1b3-99109ea0928e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mr9hz" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.582186 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.588344 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/192cd808-feeb-4944-a1b3-99109ea0928e-config\") pod \"controller-manager-879f6c89f-mr9hz\" (UID: \"192cd808-feeb-4944-a1b3-99109ea0928e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mr9hz" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.602556 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.609080 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/192cd808-feeb-4944-a1b3-99109ea0928e-client-ca\") pod \"controller-manager-879f6c89f-mr9hz\" (UID: \"192cd808-feeb-4944-a1b3-99109ea0928e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mr9hz" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.637095 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.638951 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/192cd808-feeb-4944-a1b3-99109ea0928e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-mr9hz\" (UID: \"192cd808-feeb-4944-a1b3-99109ea0928e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mr9hz" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.642472 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.663862 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.669889 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bf5d3de7-2818-4aa8-9c56-81244d431713-etcd-serving-ca\") pod \"apiserver-76f77b778f-m8qq4\" (UID: \"bf5d3de7-2818-4aa8-9c56-81244d431713\") " pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.683476 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.702230 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.710699 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bf5d3de7-2818-4aa8-9c56-81244d431713-etcd-client\") pod \"apiserver-76f77b778f-m8qq4\" (UID: \"bf5d3de7-2818-4aa8-9c56-81244d431713\") " pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.722461 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 06:48:45 crc kubenswrapper[4796]: 
I0127 06:48:45.730629 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf5d3de7-2818-4aa8-9c56-81244d431713-serving-cert\") pod \"apiserver-76f77b778f-m8qq4\" (UID: \"bf5d3de7-2818-4aa8-9c56-81244d431713\") " pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.742989 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.749728 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bf5d3de7-2818-4aa8-9c56-81244d431713-encryption-config\") pod \"apiserver-76f77b778f-m8qq4\" (UID: \"bf5d3de7-2818-4aa8-9c56-81244d431713\") " pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.762807 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.769290 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/bf5d3de7-2818-4aa8-9c56-81244d431713-image-import-ca\") pod \"apiserver-76f77b778f-m8qq4\" (UID: \"bf5d3de7-2818-4aa8-9c56-81244d431713\") " pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.790432 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.800941 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf5d3de7-2818-4aa8-9c56-81244d431713-trusted-ca-bundle\") pod \"apiserver-76f77b778f-m8qq4\" (UID: \"bf5d3de7-2818-4aa8-9c56-81244d431713\") " pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.803003 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.822810 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.867255 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.867706 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.877180 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/bf5d3de7-2818-4aa8-9c56-81244d431713-audit\") pod \"apiserver-76f77b778f-m8qq4\" (UID: \"bf5d3de7-2818-4aa8-9c56-81244d431713\") " pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.877468 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf5d3de7-2818-4aa8-9c56-81244d431713-config\") pod \"apiserver-76f77b778f-m8qq4\" (UID: \"bf5d3de7-2818-4aa8-9c56-81244d431713\") " pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.884120 4796 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.903334 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.909941 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/00061f00-b799-407e-8b71-30de57b92847-oauth-serving-cert\") pod \"console-f9d7485db-r6xbk\" (UID: \"00061f00-b799-407e-8b71-30de57b92847\") " pod="openshift-console/console-f9d7485db-r6xbk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.923288 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.933020 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/00061f00-b799-407e-8b71-30de57b92847-console-serving-cert\") pod \"console-f9d7485db-r6xbk\" (UID: \"00061f00-b799-407e-8b71-30de57b92847\") " pod="openshift-console/console-f9d7485db-r6xbk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.943209 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.951241 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/00061f00-b799-407e-8b71-30de57b92847-console-oauth-config\") pod \"console-f9d7485db-r6xbk\" (UID: \"00061f00-b799-407e-8b71-30de57b92847\") " pod="openshift-console/console-f9d7485db-r6xbk" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.962077 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.982151 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 06:48:45 crc kubenswrapper[4796]: I0127 06:48:45.989282 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/00061f00-b799-407e-8b71-30de57b92847-console-config\") pod \"console-f9d7485db-r6xbk\" (UID: \"00061f00-b799-407e-8b71-30de57b92847\") " pod="openshift-console/console-f9d7485db-r6xbk" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.002069 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.009711 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/00061f00-b799-407e-8b71-30de57b92847-service-ca\") pod \"console-f9d7485db-r6xbk\" (UID: \"00061f00-b799-407e-8b71-30de57b92847\") " pod="openshift-console/console-f9d7485db-r6xbk" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.030025 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.038111 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00061f00-b799-407e-8b71-30de57b92847-trusted-ca-bundle\") pod \"console-f9d7485db-r6xbk\" (UID: \"00061f00-b799-407e-8b71-30de57b92847\") " 
pod="openshift-console/console-f9d7485db-r6xbk" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.042440 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.052309 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/753527d9-7500-4526-be72-3306582c5f7d-config\") pod \"service-ca-operator-777779d784-nwtjc\" (UID: \"753527d9-7500-4526-be72-3306582c5f7d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nwtjc" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.064050 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.083313 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.093032 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/753527d9-7500-4526-be72-3306582c5f7d-serving-cert\") pod \"service-ca-operator-777779d784-nwtjc\" (UID: \"753527d9-7500-4526-be72-3306582c5f7d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nwtjc" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.103353 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.123360 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.143310 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.163171 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.169475 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68d73a51-598c-41e8-9064-3942bd4f93df-config\") pod \"machine-api-operator-5694c8668f-46nhj\" (UID: \"68d73a51-598c-41e8-9064-3942bd4f93df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-46nhj" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.183337 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.193509 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/55c59494-3868-4467-a864-52b1b3385b5e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wrzb2\" (UID: \"55c59494-3868-4467-a864-52b1b3385b5e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrzb2" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.203961 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.214440 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/68d73a51-598c-41e8-9064-3942bd4f93df-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-46nhj\" (UID: \"68d73a51-598c-41e8-9064-3942bd4f93df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-46nhj" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.222965 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.243259 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.246398 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/68d73a51-598c-41e8-9064-3942bd4f93df-images\") pod \"machine-api-operator-5694c8668f-46nhj\" (UID: \"68d73a51-598c-41e8-9064-3942bd4f93df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-46nhj" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.263387 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.268112 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55c59494-3868-4467-a864-52b1b3385b5e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wrzb2\" (UID: \"55c59494-3868-4467-a864-52b1b3385b5e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrzb2" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.283030 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.293745 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.314695 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.318813 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.320896 4796 request.go:700] Waited for 1.008531751s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-router-certs&limit=500&resourceVersion=0 Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.323510 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 06:48:46 crc kubenswrapper[4796]: 
I0127 06:48:46.330595 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.343622 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.352828 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.363697 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.371174 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.398336 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.408280 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.413609 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.423502 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.424773 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.443013 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.450857 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.463436 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.483819 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.490962 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.502846 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.509129 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/56d7f37b-05cc-4a36-b844-423465e79e8e-audit-policies\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.522501 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 06:48:46 crc kubenswrapper[4796]: E0127 06:48:46.527862 4796 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Jan 27 06:48:46 crc kubenswrapper[4796]: E0127 06:48:46.527917 4796 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: failed to sync configmap cache: timed out waiting for the condition Jan 27 06:48:46 crc kubenswrapper[4796]: E0127 06:48:46.527940 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b035abe-aba2-464c-bf93-43ca7da14869-node-bootstrap-token podName:1b035abe-aba2-464c-bf93-43ca7da14869 nodeName:}" failed. No retries permitted until 2026-01-27 06:48:47.027922411 +0000 UTC m=+148.134889738 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/1b035abe-aba2-464c-bf93-43ca7da14869-node-bootstrap-token") pod "machine-config-server-r49vt" (UID: "1b035abe-aba2-464c-bf93-43ca7da14869") : failed to sync secret cache: timed out waiting for the condition Jan 27 06:48:46 crc kubenswrapper[4796]: E0127 06:48:46.528008 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-service-ca podName:56d7f37b-05cc-4a36-b844-423465e79e8e nodeName:}" failed. No retries permitted until 2026-01-27 06:48:47.027983114 +0000 UTC m=+148.134950451 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-service-ca") pod "oauth-openshift-558db77b4-bxmhd" (UID: "56d7f37b-05cc-4a36-b844-423465e79e8e") : failed to sync configmap cache: timed out waiting for the condition Jan 27 06:48:46 crc kubenswrapper[4796]: E0127 06:48:46.528031 4796 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition Jan 27 06:48:46 crc kubenswrapper[4796]: E0127 06:48:46.528101 4796 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Jan 27 06:48:46 crc kubenswrapper[4796]: E0127 06:48:46.528122 4796 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: failed to sync configmap cache: timed out waiting for the condition Jan 27 06:48:46 crc kubenswrapper[4796]: E0127 06:48:46.528032 4796 secret.go:188] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Jan 27 06:48:46 crc kubenswrapper[4796]: E0127 06:48:46.528194 4796 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 27 06:48:46 crc kubenswrapper[4796]: E0127 06:48:46.528163 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ee6f1f88-f21e-4b9b-ab68-79d68bd37a7f-signing-cabundle podName:ee6f1f88-f21e-4b9b-ab68-79d68bd37a7f nodeName:}" failed. No retries permitted until 2026-01-27 06:48:47.028130757 +0000 UTC m=+148.135098104 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/ee6f1f88-f21e-4b9b-ab68-79d68bd37a7f-signing-cabundle") pod "service-ca-9c57cc56f-2ggnl" (UID: "ee6f1f88-f21e-4b9b-ab68-79d68bd37a7f") : failed to sync configmap cache: timed out waiting for the condition Jan 27 06:48:46 crc kubenswrapper[4796]: E0127 06:48:46.528297 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d2b48287-9900-438d-b356-0859859a90e8-trusted-ca podName:d2b48287-9900-438d-b356-0859859a90e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:48:47.02823539 +0000 UTC m=+148.135202757 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/d2b48287-9900-438d-b356-0859859a90e8-trusted-ca") pod "console-operator-58897d9998-4ddms" (UID: "d2b48287-9900-438d-b356-0859859a90e8") : failed to sync configmap cache: timed out waiting for the condition Jan 27 06:48:46 crc kubenswrapper[4796]: E0127 06:48:46.528346 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/98eaa352-8307-40bd-b8a3-16f2e3088fa4-config-volume podName:98eaa352-8307-40bd-b8a3-16f2e3088fa4 nodeName:}" failed. No retries permitted until 2026-01-27 06:48:47.028326692 +0000 UTC m=+148.135294059 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/98eaa352-8307-40bd-b8a3-16f2e3088fa4-config-volume") pod "collect-profiles-29491605-kd67n" (UID: "98eaa352-8307-40bd-b8a3-16f2e3088fa4") : failed to sync configmap cache: timed out waiting for the condition Jan 27 06:48:46 crc kubenswrapper[4796]: E0127 06:48:46.528376 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b035abe-aba2-464c-bf93-43ca7da14869-certs podName:1b035abe-aba2-464c-bf93-43ca7da14869 nodeName:}" failed. No retries permitted until 2026-01-27 06:48:47.028363523 +0000 UTC m=+148.135330880 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/1b035abe-aba2-464c-bf93-43ca7da14869-certs") pod "machine-config-server-r49vt" (UID: "1b035abe-aba2-464c-bf93-43ca7da14869") : failed to sync secret cache: timed out waiting for the condition Jan 27 06:48:46 crc kubenswrapper[4796]: E0127 06:48:46.528412 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d2b48287-9900-438d-b356-0859859a90e8-config podName:d2b48287-9900-438d-b356-0859859a90e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:48:47.028399914 +0000 UTC m=+148.135367271 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d2b48287-9900-438d-b356-0859859a90e8-config") pod "console-operator-58897d9998-4ddms" (UID: "d2b48287-9900-438d-b356-0859859a90e8") : failed to sync configmap cache: timed out waiting for the condition Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.529063 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:46 crc kubenswrapper[4796]: E0127 06:48:46.529154 4796 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Jan 27 06:48:46 crc kubenswrapper[4796]: E0127 06:48:46.529184 4796 secret.go:188] Couldn't get secret openshift-console-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 27 06:48:46 crc kubenswrapper[4796]: E0127 06:48:46.529197 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee6f1f88-f21e-4b9b-ab68-79d68bd37a7f-signing-key podName:ee6f1f88-f21e-4b9b-ab68-79d68bd37a7f nodeName:}" failed. No retries permitted until 2026-01-27 06:48:47.029184054 +0000 UTC m=+148.136151381 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/ee6f1f88-f21e-4b9b-ab68-79d68bd37a7f-signing-key") pod "service-ca-9c57cc56f-2ggnl" (UID: "ee6f1f88-f21e-4b9b-ab68-79d68bd37a7f") : failed to sync secret cache: timed out waiting for the condition Jan 27 06:48:46 crc kubenswrapper[4796]: E0127 06:48:46.529286 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2b48287-9900-438d-b356-0859859a90e8-serving-cert podName:d2b48287-9900-438d-b356-0859859a90e8 nodeName:}" failed. No retries permitted until 2026-01-27 06:48:47.029263626 +0000 UTC m=+148.136231183 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d2b48287-9900-438d-b356-0859859a90e8-serving-cert") pod "console-operator-58897d9998-4ddms" (UID: "d2b48287-9900-438d-b356-0859859a90e8") : failed to sync secret cache: timed out waiting for the condition Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.542369 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.562824 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.582604 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.602998 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.623407 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.653929 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.663624 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.684375 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.703831 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.722746 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.742668 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.763662 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.782841 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.802648 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.823367 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.842770 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.862448 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.884159 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 27 
06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.903264 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.922876 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.943569 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 06:48:46 crc kubenswrapper[4796]: I0127 06:48:46.962965 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.002647 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.023682 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.043693 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.054710 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.054839 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2b48287-9900-438d-b356-0859859a90e8-config\") pod \"console-operator-58897d9998-4ddms\" (UID: \"d2b48287-9900-438d-b356-0859859a90e8\") " pod="openshift-console-operator/console-operator-58897d9998-4ddms" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.054907 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ee6f1f88-f21e-4b9b-ab68-79d68bd37a7f-signing-key\") pod \"service-ca-9c57cc56f-2ggnl\" (UID: \"ee6f1f88-f21e-4b9b-ab68-79d68bd37a7f\") " pod="openshift-service-ca/service-ca-9c57cc56f-2ggnl" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.054953 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1b035abe-aba2-464c-bf93-43ca7da14869-certs\") pod \"machine-config-server-r49vt\" (UID: \"1b035abe-aba2-464c-bf93-43ca7da14869\") " pod="openshift-machine-config-operator/machine-config-server-r49vt" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.055254 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ee6f1f88-f21e-4b9b-ab68-79d68bd37a7f-signing-cabundle\") pod \"service-ca-9c57cc56f-2ggnl\" (UID: \"ee6f1f88-f21e-4b9b-ab68-79d68bd37a7f\") " pod="openshift-service-ca/service-ca-9c57cc56f-2ggnl" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.055304 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/98eaa352-8307-40bd-b8a3-16f2e3088fa4-config-volume\") pod \"collect-profiles-29491605-kd67n\" (UID: \"98eaa352-8307-40bd-b8a3-16f2e3088fa4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-kd67n" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.055324 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d2b48287-9900-438d-b356-0859859a90e8-trusted-ca\") pod \"console-operator-58897d9998-4ddms\" (UID: \"d2b48287-9900-438d-b356-0859859a90e8\") " pod="openshift-console-operator/console-operator-58897d9998-4ddms" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.055349 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1b035abe-aba2-464c-bf93-43ca7da14869-node-bootstrap-token\") pod \"machine-config-server-r49vt\" (UID: \"1b035abe-aba2-464c-bf93-43ca7da14869\") " pod="openshift-machine-config-operator/machine-config-server-r49vt" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.055379 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2b48287-9900-438d-b356-0859859a90e8-serving-cert\") pod \"console-operator-58897d9998-4ddms\" (UID: \"d2b48287-9900-438d-b356-0859859a90e8\") " pod="openshift-console-operator/console-operator-58897d9998-4ddms" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.056782 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.057682 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98eaa352-8307-40bd-b8a3-16f2e3088fa4-config-volume\") pod \"collect-profiles-29491605-kd67n\" (UID: \"98eaa352-8307-40bd-b8a3-16f2e3088fa4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-kd67n" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.057881 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ee6f1f88-f21e-4b9b-ab68-79d68bd37a7f-signing-cabundle\") pod \"service-ca-9c57cc56f-2ggnl\" (UID: \"ee6f1f88-f21e-4b9b-ab68-79d68bd37a7f\") " pod="openshift-service-ca/service-ca-9c57cc56f-2ggnl" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.058634 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d2b48287-9900-438d-b356-0859859a90e8-trusted-ca\") pod \"console-operator-58897d9998-4ddms\" (UID: \"d2b48287-9900-438d-b356-0859859a90e8\") " pod="openshift-console-operator/console-operator-58897d9998-4ddms" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.059481 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2b48287-9900-438d-b356-0859859a90e8-config\") pod \"console-operator-58897d9998-4ddms\" (UID: \"d2b48287-9900-438d-b356-0859859a90e8\") " pod="openshift-console-operator/console-operator-58897d9998-4ddms" Jan 27 06:48:47 crc kubenswrapper[4796]: 
I0127 06:48:47.060744 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2b48287-9900-438d-b356-0859859a90e8-serving-cert\") pod \"console-operator-58897d9998-4ddms\" (UID: \"d2b48287-9900-438d-b356-0859859a90e8\") " pod="openshift-console-operator/console-operator-58897d9998-4ddms" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.061480 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/1b035abe-aba2-464c-bf93-43ca7da14869-node-bootstrap-token\") pod \"machine-config-server-r49vt\" (UID: \"1b035abe-aba2-464c-bf93-43ca7da14869\") " pod="openshift-machine-config-operator/machine-config-server-r49vt" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.062458 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ee6f1f88-f21e-4b9b-ab68-79d68bd37a7f-signing-key\") pod \"service-ca-9c57cc56f-2ggnl\" (UID: \"ee6f1f88-f21e-4b9b-ab68-79d68bd37a7f\") " pod="openshift-service-ca/service-ca-9c57cc56f-2ggnl" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.062467 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/1b035abe-aba2-464c-bf93-43ca7da14869-certs\") pod \"machine-config-server-r49vt\" (UID: \"1b035abe-aba2-464c-bf93-43ca7da14869\") " pod="openshift-machine-config-operator/machine-config-server-r49vt" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.102757 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.104562 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g5xl\" (UniqueName: \"kubernetes.io/projected/a82fe90d-0a61-4da5-bfac-7c69f6a59339-kube-api-access-4g5xl\") pod \"openshift-config-operator-7777fb866f-zprl5\" (UID: \"a82fe90d-0a61-4da5-bfac-7c69f6a59339\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zprl5" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.137427 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zprl5" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.143597 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.145337 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szxrj\" (UniqueName: \"kubernetes.io/projected/e271bec2-e43e-4236-a3d5-024b55665af9-kube-api-access-szxrj\") pod \"authentication-operator-69f744f599-859tc\" (UID: \"e271bec2-e43e-4236-a3d5-024b55665af9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-859tc" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.160791 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-859tc" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.184607 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqvpf\" (UniqueName: \"kubernetes.io/projected/1bb7a5d9-b5b4-41b6-ac58-a33656c85de0-kube-api-access-hqvpf\") pod \"route-controller-manager-6576b87f9c-428lj\" (UID: \"1bb7a5d9-b5b4-41b6-ac58-a33656c85de0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-428lj" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.206735 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f4dad69-2226-4d55-95af-7fe3b3cfaf41-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xzjrl\" (UID: \"3f4dad69-2226-4d55-95af-7fe3b3cfaf41\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xzjrl" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.211639 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xzjrl" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.234550 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wn9g\" (UniqueName: \"kubernetes.io/projected/48206a38-ab99-4560-9cf1-6a260f52c37d-kube-api-access-5wn9g\") pod \"openshift-controller-manager-operator-756b6f6bc6-tnjwp\" (UID: \"48206a38-ab99-4560-9cf1-6a260f52c37d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnjwp" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.259366 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mvnd\" (UniqueName: \"kubernetes.io/projected/bcacc8cc-dbaa-4826-9939-0af3e0cc3297-kube-api-access-5mvnd\") pod \"migrator-59844c95c7-ts2rv\" (UID: \"bcacc8cc-dbaa-4826-9939-0af3e0cc3297\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ts2rv" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.261822 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8dws\" (UniqueName: \"kubernetes.io/projected/c009d452-642e-47de-994c-cc6e0af791f9-kube-api-access-d8dws\") pod \"marketplace-operator-79b997595-qhgrn\" (UID: \"c009d452-642e-47de-994c-cc6e0af791f9\") " pod="openshift-marketplace/marketplace-operator-79b997595-qhgrn" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.262791 4796 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.277229 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qhgrn" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.300677 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4tmk\" (UniqueName: \"kubernetes.io/projected/9e475ebe-1a52-4423-bc08-5c659c92f7e8-kube-api-access-b4tmk\") pod \"etcd-operator-b45778765-6qfnr\" (UID: \"9e475ebe-1a52-4423-bc08-5c659c92f7e8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-6qfnr" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.319311 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2smt\" (UniqueName: \"kubernetes.io/projected/bbe422dd-b194-4dd1-a363-c9cbe72971f9-kube-api-access-w2smt\") pod \"dns-operator-744455d44c-t5kh8\" (UID: \"bbe422dd-b194-4dd1-a363-c9cbe72971f9\") " pod="openshift-dns-operator/dns-operator-744455d44c-t5kh8" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.342742 4796 request.go:700] Waited for 1.920281785s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.343723 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr7ff\" (UniqueName: \"kubernetes.io/projected/f2e2d019-7fb7-4f75-81ee-b20a700c8f0b-kube-api-access-xr7ff\") pod \"downloads-7954f5f757-ltfqp\" (UID: \"f2e2d019-7fb7-4f75-81ee-b20a700c8f0b\") " pod="openshift-console/downloads-7954f5f757-ltfqp" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.349587 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-t5kh8" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.364125 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a982bd3f-5096-4ddc-9a62-9cec039757e1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4tgsj\" (UID: \"a982bd3f-5096-4ddc-9a62-9cec039757e1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tgsj" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.378672 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lr4m\" (UniqueName: \"kubernetes.io/projected/7dbac4dd-abc3-4fe1-9b4c-2106cf81684e-kube-api-access-2lr4m\") pod \"kube-storage-version-migrator-operator-b67b599dd-hs9jk\" (UID: \"7dbac4dd-abc3-4fe1-9b4c-2106cf81684e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hs9jk" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.407040 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ftsm\" (UniqueName: \"kubernetes.io/projected/a982bd3f-5096-4ddc-9a62-9cec039757e1-kube-api-access-7ftsm\") pod \"cluster-image-registry-operator-dc59b4c8b-4tgsj\" (UID: \"a982bd3f-5096-4ddc-9a62-9cec039757e1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tgsj" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.408762 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ts2rv" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.431463 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccqc7\" (UniqueName: \"kubernetes.io/projected/a08326b5-7ad1-43ef-987c-86b61eeade10-kube-api-access-ccqc7\") pod \"openshift-apiserver-operator-796bbdcf4f-7hj88\" (UID: \"a08326b5-7ad1-43ef-987c-86b61eeade10\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7hj88" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.443822 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cx9d\" (UniqueName: \"kubernetes.io/projected/4bb970b4-43c2-46e2-b707-20145a03a2bb-kube-api-access-6cx9d\") pod \"control-plane-machine-set-operator-78cbb6b69f-pd9lg\" (UID: \"4bb970b4-43c2-46e2-b707-20145a03a2bb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pd9lg" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.460886 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23a02d37-8e7f-4855-a203-a3d9865cdd3b-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-6gn8b\" (UID: \"23a02d37-8e7f-4855-a203-a3d9865cdd3b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6gn8b" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.472724 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-zprl5"] Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.473197 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-428lj" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.475318 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6gn8b" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.478296 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfsv8\" (UniqueName: \"kubernetes.io/projected/24ab0c5d-3e8a-4f6a-8060-882262c28888-kube-api-access-lfsv8\") pod \"machine-approver-56656f9798-nh9gk\" (UID: \"24ab0c5d-3e8a-4f6a-8060-882262c28888\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nh9gk" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.484078 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-ltfqp" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.492068 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hs9jk" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.499449 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnjwp" Jan 27 06:48:47 crc kubenswrapper[4796]: W0127 06:48:47.502206 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda82fe90d_0a61_4da5_bfac_7c69f6a59339.slice/crio-d5ee0d505d38ac4acda855c75e78d4321766730cfdc4135d2854bbab8ff820e6 WatchSource:0}: Error finding container d5ee0d505d38ac4acda855c75e78d4321766730cfdc4135d2854bbab8ff820e6: Status 404 returned error can't find the container with id d5ee0d505d38ac4acda855c75e78d4321766730cfdc4135d2854bbab8ff820e6 Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.505110 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-859tc"] Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.506321 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x28kc\" (UniqueName: \"kubernetes.io/projected/5d1e76c1-ba8e-41b1-a312-0670b06bc59f-kube-api-access-x28kc\") pod \"multus-admission-controller-857f4d67dd-6g4p8\" (UID: \"5d1e76c1-ba8e-41b1-a312-0670b06bc59f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6g4p8" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.506601 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-6qfnr" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.519888 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xzjrl"] Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.521518 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7hj88" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.526254 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9gmn\" (UniqueName: \"kubernetes.io/projected/677d7062-8915-454b-aff0-999ba539d454-kube-api-access-m9gmn\") pod \"packageserver-d55dfcdfc-rzsch\" (UID: \"677d7062-8915-454b-aff0-999ba539d454\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rzsch" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.538530 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9674\" (UniqueName: \"kubernetes.io/projected/3b10ebe2-f762-4ac3-a59a-76aa1256bdf7-kube-api-access-h9674\") pod \"apiserver-7bbb656c7d-pxnl2\" (UID: \"3b10ebe2-f762-4ac3-a59a-76aa1256bdf7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pxnl2" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.548912 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-859tc" event={"ID":"e271bec2-e43e-4236-a3d5-024b55665af9","Type":"ContainerStarted","Data":"227a7a69e4ab26176dca1fa2147ed11ebddba0c6931f57b0f0ac3faa3afd7d26"} Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.550257 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zprl5" event={"ID":"a82fe90d-0a61-4da5-bfac-7c69f6a59339","Type":"ContainerStarted","Data":"d5ee0d505d38ac4acda855c75e78d4321766730cfdc4135d2854bbab8ff820e6"} Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.556449 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qhgrn"] Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.561350 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb7x2\" (UniqueName: \"kubernetes.io/projected/0b9441e9-83fe-42de-92a2-71efc3550ac1-kube-api-access-qb7x2\") pod \"olm-operator-6b444d44fb-ts729\" (UID: \"0b9441e9-83fe-42de-92a2-71efc3550ac1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ts729" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.579123 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stw2d\" (UniqueName: \"kubernetes.io/projected/ff492068-85f5-4775-b13f-2604432a063e-kube-api-access-stw2d\") pod \"catalog-operator-68c6474976-d75v6\" (UID: \"ff492068-85f5-4775-b13f-2604432a063e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d75v6" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.582758 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rzsch" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.583368 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.588915 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tgsj" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.595069 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nh9gk" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.601492 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ts729" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.604518 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 27 06:48:47 crc kubenswrapper[4796]: W0127 06:48:47.618101 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc009d452_642e_47de_994c_cc6e0af791f9.slice/crio-afc806a2c32b84b87e7c7ff21145c9d82d640a01d01d6af6a080990d60659c33 WatchSource:0}: Error finding container afc806a2c32b84b87e7c7ff21145c9d82d640a01d01d6af6a080990d60659c33: Status 404 returned error can't find the container with id afc806a2c32b84b87e7c7ff21145c9d82d640a01d01d6af6a080990d60659c33 Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.618215 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pd9lg" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.624131 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.628818 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pxnl2" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.644116 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.675042 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.682484 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.688266 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ts2rv"] Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.704254 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.718418 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-6g4p8" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.748046 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6gn8b"] Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.755982 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghgz4\" (UniqueName: \"kubernetes.io/projected/56d7f37b-05cc-4a36-b844-423465e79e8e-kube-api-access-ghgz4\") pod \"oauth-openshift-558db77b4-bxmhd\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.761330 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbs75\" (UniqueName: \"kubernetes.io/projected/bf5d3de7-2818-4aa8-9c56-81244d431713-kube-api-access-vbs75\") pod \"apiserver-76f77b778f-m8qq4\" (UID: \"bf5d3de7-2818-4aa8-9c56-81244d431713\") " pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.781288 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc6lv\" (UniqueName: \"kubernetes.io/projected/34e19426-cb00-4e09-9933-57a015735c77-kube-api-access-zc6lv\") pod \"cluster-samples-operator-665b6dd947-bzdxk\" (UID: \"34e19426-cb00-4e09-9933-57a015735c77\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bzdxk" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.805604 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-428lj"] Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.820068 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-t5kh8"] Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.821422 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bz49\" (UniqueName: \"kubernetes.io/projected/98eaa352-8307-40bd-b8a3-16f2e3088fa4-kube-api-access-8bz49\") pod \"collect-profiles-29491605-kd67n\" (UID: \"98eaa352-8307-40bd-b8a3-16f2e3088fa4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-kd67n" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.821812 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5bms\" (UniqueName: \"kubernetes.io/projected/192cd808-feeb-4944-a1b3-99109ea0928e-kube-api-access-v5bms\") pod \"controller-manager-879f6c89f-mr9hz\" (UID: \"192cd808-feeb-4944-a1b3-99109ea0928e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-mr9hz" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.838098 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5swjv\" (UniqueName: \"kubernetes.io/projected/e05300a3-a8c8-495f-a4fd-79f326ce0d73-kube-api-access-5swjv\") pod \"ingress-operator-5b745b69d9-lf8gr\" (UID: \"e05300a3-a8c8-495f-a4fd-79f326ce0d73\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lf8gr" Jan 27 06:48:47 crc kubenswrapper[4796]: W0127 06:48:47.857021 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23a02d37_8e7f_4855_a203_a3d9865cdd3b.slice/crio-92858b420a4b278a0815f2a164aa518b454d9a1fedb5f7d8ac1534e28172d60e WatchSource:0}: Error finding 
container 92858b420a4b278a0815f2a164aa518b454d9a1fedb5f7d8ac1534e28172d60e: Status 404 returned error can't find the container with id 92858b420a4b278a0815f2a164aa518b454d9a1fedb5f7d8ac1534e28172d60e Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.858758 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr89j\" (UniqueName: \"kubernetes.io/projected/d2b48287-9900-438d-b356-0859859a90e8-kube-api-access-hr89j\") pod \"console-operator-58897d9998-4ddms\" (UID: \"d2b48287-9900-438d-b356-0859859a90e8\") " pod="openshift-console-operator/console-operator-58897d9998-4ddms" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.858811 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hs9jk"] Jan 27 06:48:47 crc kubenswrapper[4796]: W0127 06:48:47.860826 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bb7a5d9_b5b4_41b6_ac58_a33656c85de0.slice/crio-8e0f3ed8bd2c7e20f6421ea32ae795190bd247b0a4587a9522357db1dea6ad47 WatchSource:0}: Error finding container 8e0f3ed8bd2c7e20f6421ea32ae795190bd247b0a4587a9522357db1dea6ad47: Status 404 returned error can't find the container with id 8e0f3ed8bd2c7e20f6421ea32ae795190bd247b0a4587a9522357db1dea6ad47 Jan 27 06:48:47 crc kubenswrapper[4796]: W0127 06:48:47.871382 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbbe422dd_b194_4dd1_a363_c9cbe72971f9.slice/crio-c13c8e54bd95cb5e8a7410b5f5e184a1c6f0d873743f741dd0e1ab1263a4382f WatchSource:0}: Error finding container c13c8e54bd95cb5e8a7410b5f5e184a1c6f0d873743f741dd0e1ab1263a4382f: Status 404 returned error can't find the container with id c13c8e54bd95cb5e8a7410b5f5e184a1c6f0d873743f741dd0e1ab1263a4382f Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.871684 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d75v6" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.877014 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kxts\" (UniqueName: \"kubernetes.io/projected/753527d9-7500-4526-be72-3306582c5f7d-kube-api-access-4kxts\") pod \"service-ca-operator-777779d784-nwtjc\" (UID: \"753527d9-7500-4526-be72-3306582c5f7d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-nwtjc" Jan 27 06:48:47 crc kubenswrapper[4796]: W0127 06:48:47.896276 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7dbac4dd_abc3_4fe1_9b4c_2106cf81684e.slice/crio-261c8bb9d20e9d36f8e4312cddae9ce6f9e3fea8b509721a010954374bbc543f WatchSource:0}: Error finding container 261c8bb9d20e9d36f8e4312cddae9ce6f9e3fea8b509721a010954374bbc543f: Status 404 returned error can't find the container with id 261c8bb9d20e9d36f8e4312cddae9ce6f9e3fea8b509721a010954374bbc543f Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.901549 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px6rg\" (UniqueName: \"kubernetes.io/projected/00061f00-b799-407e-8b71-30de57b92847-kube-api-access-px6rg\") pod \"console-f9d7485db-r6xbk\" (UID: \"00061f00-b799-407e-8b71-30de57b92847\") " pod="openshift-console/console-f9d7485db-r6xbk" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.907483 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bzdxk" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.921165 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b85sb\" (UniqueName: \"kubernetes.io/projected/28c00736-23cd-46f9-b656-94a11f71a470-kube-api-access-b85sb\") pod \"machine-config-controller-84d6567774-pkjfq\" (UID: \"28c00736-23cd-46f9-b656-94a11f71a470\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pkjfq" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.981200 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/55c59494-3868-4467-a864-52b1b3385b5e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wrzb2\" (UID: \"55c59494-3868-4467-a864-52b1b3385b5e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrzb2" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.981487 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mr9hz" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.981907 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nwtjc" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.982392 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" Jan 27 06:48:47 crc kubenswrapper[4796]: I0127 06:48:47.989032 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-r6xbk" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.005031 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrzb2" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.006473 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nthlh\" (UniqueName: \"kubernetes.io/projected/ee6f1f88-f21e-4b9b-ab68-79d68bd37a7f-kube-api-access-nthlh\") pod \"service-ca-9c57cc56f-2ggnl\" (UID: \"ee6f1f88-f21e-4b9b-ab68-79d68bd37a7f\") " pod="openshift-service-ca/service-ca-9c57cc56f-2ggnl" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.010701 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.011219 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhqcq\" (UniqueName: \"kubernetes.io/projected/884fd52c-4588-455a-a5b7-b333dc17aa3c-kube-api-access-vhqcq\") pod \"package-server-manager-789f6589d5-sk2nz\" (UID: \"884fd52c-4588-455a-a5b7-b333dc17aa3c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sk2nz" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.015198 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e05300a3-a8c8-495f-a4fd-79f326ce0d73-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lf8gr\" (UID: \"e05300a3-a8c8-495f-a4fd-79f326ce0d73\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lf8gr" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.023351 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb557\" (UniqueName: \"kubernetes.io/projected/73db7fe9-d1e0-44f1-88fa-a186dec7a4b0-kube-api-access-qb557\") pod \"machine-config-operator-74547568cd-9t47f\" (UID: \"73db7fe9-d1e0-44f1-88fa-a186dec7a4b0\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9t47f" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.025729 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-4ddms" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.036060 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-2ggnl" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.050327 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-kd67n" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.051163 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx7l8\" (UniqueName: \"kubernetes.io/projected/68d73a51-598c-41e8-9064-3942bd4f93df-kube-api-access-kx7l8\") pod \"machine-api-operator-5694c8668f-46nhj\" (UID: \"68d73a51-598c-41e8-9064-3942bd4f93df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-46nhj" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.068237 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tqkz\" (UniqueName: \"kubernetes.io/projected/1b035abe-aba2-464c-bf93-43ca7da14869-kube-api-access-9tqkz\") pod \"machine-config-server-r49vt\" (UID: \"1b035abe-aba2-464c-bf93-43ca7da14869\") " pod="openshift-machine-config-operator/machine-config-server-r49vt" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.093431 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pd9lg"] Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.144349 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-ltfqp"] Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.157200 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7hj88"] Jan 27 06:48:48 crc kubenswrapper[4796]: W0127 06:48:48.184092 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bb970b4_43c2_46e2_b707_20145a03a2bb.slice/crio-9eadf4515a9b9216e390b74f8c49039f2a6466c5185e39c5e0fb90114fdb07ef WatchSource:0}: Error finding container 9eadf4515a9b9216e390b74f8c49039f2a6466c5185e39c5e0fb90114fdb07ef: Status 404 returned error can't find the container with id 9eadf4515a9b9216e390b74f8c49039f2a6466c5185e39c5e0fb90114fdb07ef Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.187335 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/663b211e-0671-47e8-ae93-5c177e7a21eb-metrics-certs\") pod \"router-default-5444994796-qwn64\" (UID: \"663b211e-0671-47e8-ae93-5c177e7a21eb\") " pod="openshift-ingress/router-default-5444994796-qwn64" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.187474 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/663b211e-0671-47e8-ae93-5c177e7a21eb-default-certificate\") pod \"router-default-5444994796-qwn64\" (UID: \"663b211e-0671-47e8-ae93-5c177e7a21eb\") " pod="openshift-ingress/router-default-5444994796-qwn64" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.187897 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-trusted-ca\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.187951 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/663b211e-0671-47e8-ae93-5c177e7a21eb-service-ca-bundle\") pod \"router-default-5444994796-qwn64\" (UID: \"663b211e-0671-47e8-ae93-5c177e7a21eb\") " pod="openshift-ingress/router-default-5444994796-qwn64" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.188349 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-6qfnr"] Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.189619 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfcfx\" (UniqueName: \"kubernetes.io/projected/663b211e-0671-47e8-ae93-5c177e7a21eb-kube-api-access-dfcfx\") pod \"router-default-5444994796-qwn64\" (UID: \"663b211e-0671-47e8-ae93-5c177e7a21eb\") " pod="openshift-ingress/router-default-5444994796-qwn64" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.189698 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.189731 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6njl\" (UniqueName: \"kubernetes.io/projected/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-kube-api-access-r6njl\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.190252 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-registry-certificates\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.192154 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-bound-sa-token\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.194407 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnjwp"] Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.195317 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.196299 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/663b211e-0671-47e8-ae93-5c177e7a21eb-stats-auth\") pod 
\"router-default-5444994796-qwn64\" (UID: \"663b211e-0671-47e8-ae93-5c177e7a21eb\") " pod="openshift-ingress/router-default-5444994796-qwn64" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.197280 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ts729"] Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.197744 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-registry-tls\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.199038 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:48 crc kubenswrapper[4796]: E0127 06:48:48.200615 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:48.700588218 +0000 UTC m=+149.807555715 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.205868 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rzsch"] Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.223262 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pkjfq" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.227007 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sk2nz" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.247055 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lf8gr" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.249451 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d75v6"] Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.258086 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9t47f" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.298754 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-46nhj" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.305466 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:48 crc kubenswrapper[4796]: E0127 06:48:48.305744 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:48.80570502 +0000 UTC m=+149.912672347 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.305837 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/663b211e-0671-47e8-ae93-5c177e7a21eb-metrics-certs\") pod \"router-default-5444994796-qwn64\" (UID: \"663b211e-0671-47e8-ae93-5c177e7a21eb\") " pod="openshift-ingress/router-default-5444994796-qwn64" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.306141 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/663b211e-0671-47e8-ae93-5c177e7a21eb-default-certificate\") pod \"router-default-5444994796-qwn64\" (UID: \"663b211e-0671-47e8-ae93-5c177e7a21eb\") " pod="openshift-ingress/router-default-5444994796-qwn64" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.306222 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-trusted-ca\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.306266 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/663b211e-0671-47e8-ae93-5c177e7a21eb-service-ca-bundle\") pod \"router-default-5444994796-qwn64\" (UID: \"663b211e-0671-47e8-ae93-5c177e7a21eb\") " pod="openshift-ingress/router-default-5444994796-qwn64" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.306302 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfcfx\" (UniqueName: \"kubernetes.io/projected/663b211e-0671-47e8-ae93-5c177e7a21eb-kube-api-access-dfcfx\") pod \"router-default-5444994796-qwn64\" (UID: \"663b211e-0671-47e8-ae93-5c177e7a21eb\") " pod="openshift-ingress/router-default-5444994796-qwn64" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.306336 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.306360 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6njl\" (UniqueName: \"kubernetes.io/projected/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-kube-api-access-r6njl\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.306423 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/276663de-ff96-4732-911a-fae0b469545e-csi-data-dir\") pod \"csi-hostpathplugin-69rqv\" (UID: \"276663de-ff96-4732-911a-fae0b469545e\") " pod="hostpath-provisioner/csi-hostpathplugin-69rqv" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.306477 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-registry-certificates\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.306551 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkm7r\" (UniqueName: \"kubernetes.io/projected/1ef2dc43-f6e3-4a23-85bc-f23d912b2d6d-kube-api-access-xkm7r\") pod \"ingress-canary-5fxsd\" (UID: \"1ef2dc43-f6e3-4a23-85bc-f23d912b2d6d\") " pod="openshift-ingress-canary/ingress-canary-5fxsd" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.306577 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e3d76e7c-c94d-4b43-82cf-98ca9dc9c019-metrics-tls\") pod \"dns-default-q644h\" (UID: \"e3d76e7c-c94d-4b43-82cf-98ca9dc9c019\") " pod="openshift-dns/dns-default-q644h" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.306735 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-bound-sa-token\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.306783 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9zhv\" (UniqueName: \"kubernetes.io/projected/276663de-ff96-4732-911a-fae0b469545e-kube-api-access-j9zhv\") pod \"csi-hostpathplugin-69rqv\" (UID: \"276663de-ff96-4732-911a-fae0b469545e\") " pod="hostpath-provisioner/csi-hostpathplugin-69rqv" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.306848 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/276663de-ff96-4732-911a-fae0b469545e-socket-dir\") pod \"csi-hostpathplugin-69rqv\" (UID: \"276663de-ff96-4732-911a-fae0b469545e\") " pod="hostpath-provisioner/csi-hostpathplugin-69rqv" Jan 27 06:48:48 
crc kubenswrapper[4796]: I0127 06:48:48.306881 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.306897 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3d76e7c-c94d-4b43-82cf-98ca9dc9c019-config-volume\") pod \"dns-default-q644h\" (UID: \"e3d76e7c-c94d-4b43-82cf-98ca9dc9c019\") " pod="openshift-dns/dns-default-q644h" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.306933 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/276663de-ff96-4732-911a-fae0b469545e-plugins-dir\") pod \"csi-hostpathplugin-69rqv\" (UID: \"276663de-ff96-4732-911a-fae0b469545e\") " pod="hostpath-provisioner/csi-hostpathplugin-69rqv" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.306952 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/663b211e-0671-47e8-ae93-5c177e7a21eb-stats-auth\") pod \"router-default-5444994796-qwn64\" (UID: \"663b211e-0671-47e8-ae93-5c177e7a21eb\") " pod="openshift-ingress/router-default-5444994796-qwn64" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.306976 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1ef2dc43-f6e3-4a23-85bc-f23d912b2d6d-cert\") pod \"ingress-canary-5fxsd\" (UID: \"1ef2dc43-f6e3-4a23-85bc-f23d912b2d6d\") " pod="openshift-ingress-canary/ingress-canary-5fxsd" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.306997 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/276663de-ff96-4732-911a-fae0b469545e-registration-dir\") pod \"csi-hostpathplugin-69rqv\" (UID: \"276663de-ff96-4732-911a-fae0b469545e\") " pod="hostpath-provisioner/csi-hostpathplugin-69rqv" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.307023 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-registry-tls\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.307351 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn6qw\" (UniqueName: \"kubernetes.io/projected/e3d76e7c-c94d-4b43-82cf-98ca9dc9c019-kube-api-access-wn6qw\") pod \"dns-default-q644h\" (UID: \"e3d76e7c-c94d-4b43-82cf-98ca9dc9c019\") " pod="openshift-dns/dns-default-q644h" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.307390 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: 
\"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.307431 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/276663de-ff96-4732-911a-fae0b469545e-mountpoint-dir\") pod \"csi-hostpathplugin-69rqv\" (UID: \"276663de-ff96-4732-911a-fae0b469545e\") " pod="hostpath-provisioner/csi-hostpathplugin-69rqv" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.310074 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-registry-certificates\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:48 crc kubenswrapper[4796]: E0127 06:48:48.311802 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:48.811781379 +0000 UTC m=+149.918748706 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.311845 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/663b211e-0671-47e8-ae93-5c177e7a21eb-service-ca-bundle\") pod \"router-default-5444994796-qwn64\" (UID: \"663b211e-0671-47e8-ae93-5c177e7a21eb\") " pod="openshift-ingress/router-default-5444994796-qwn64" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.313590 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-trusted-ca\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.317497 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-ca-trust-extracted\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.318134 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/663b211e-0671-47e8-ae93-5c177e7a21eb-metrics-certs\") pod \"router-default-5444994796-qwn64\" (UID: \"663b211e-0671-47e8-ae93-5c177e7a21eb\") " pod="openshift-ingress/router-default-5444994796-qwn64" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.319424 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-registry-tls\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.319925 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-installation-pull-secrets\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:48 crc kubenswrapper[4796]: W0127 06:48:48.321090 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e475ebe_1a52_4423_bc08_5c659c92f7e8.slice/crio-b57ff87957058ece7c03e1eabae151b705dc9b798edb18589dbc3c3d82097920 WatchSource:0}: Error finding container b57ff87957058ece7c03e1eabae151b705dc9b798edb18589dbc3c3d82097920: Status 404 returned error can't find the container with id b57ff87957058ece7c03e1eabae151b705dc9b798edb18589dbc3c3d82097920 Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.322360 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/663b211e-0671-47e8-ae93-5c177e7a21eb-default-certificate\") pod \"router-default-5444994796-qwn64\" (UID: \"663b211e-0671-47e8-ae93-5c177e7a21eb\") " pod="openshift-ingress/router-default-5444994796-qwn64" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.323912 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/663b211e-0671-47e8-ae93-5c177e7a21eb-stats-auth\") pod \"router-default-5444994796-qwn64\" (UID: \"663b211e-0671-47e8-ae93-5c177e7a21eb\") " pod="openshift-ingress/router-default-5444994796-qwn64" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.343211 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6njl\" (UniqueName: \"kubernetes.io/projected/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-kube-api-access-r6njl\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.362098 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-bound-sa-token\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.369028 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-r49vt" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.380040 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfcfx\" (UniqueName: \"kubernetes.io/projected/663b211e-0671-47e8-ae93-5c177e7a21eb-kube-api-access-dfcfx\") pod \"router-default-5444994796-qwn64\" (UID: \"663b211e-0671-47e8-ae93-5c177e7a21eb\") " pod="openshift-ingress/router-default-5444994796-qwn64" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.413550 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:48 crc kubenswrapper[4796]: E0127 06:48:48.413839 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:48.91379863 +0000 UTC m=+150.020765967 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.413908 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/276663de-ff96-4732-911a-fae0b469545e-csi-data-dir\") pod \"csi-hostpathplugin-69rqv\" (UID: \"276663de-ff96-4732-911a-fae0b469545e\") " pod="hostpath-provisioner/csi-hostpathplugin-69rqv" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.413996 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkm7r\" (UniqueName: \"kubernetes.io/projected/1ef2dc43-f6e3-4a23-85bc-f23d912b2d6d-kube-api-access-xkm7r\") pod \"ingress-canary-5fxsd\" (UID: \"1ef2dc43-f6e3-4a23-85bc-f23d912b2d6d\") " pod="openshift-ingress-canary/ingress-canary-5fxsd" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.414030 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e3d76e7c-c94d-4b43-82cf-98ca9dc9c019-metrics-tls\") pod \"dns-default-q644h\" (UID: \"e3d76e7c-c94d-4b43-82cf-98ca9dc9c019\") " pod="openshift-dns/dns-default-q644h" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.414083 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9zhv\" (UniqueName: \"kubernetes.io/projected/276663de-ff96-4732-911a-fae0b469545e-kube-api-access-j9zhv\") pod \"csi-hostpathplugin-69rqv\" (UID: \"276663de-ff96-4732-911a-fae0b469545e\") " pod="hostpath-provisioner/csi-hostpathplugin-69rqv" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.414119 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/276663de-ff96-4732-911a-fae0b469545e-socket-dir\") pod 
\"csi-hostpathplugin-69rqv\" (UID: \"276663de-ff96-4732-911a-fae0b469545e\") " pod="hostpath-provisioner/csi-hostpathplugin-69rqv" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.414157 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3d76e7c-c94d-4b43-82cf-98ca9dc9c019-config-volume\") pod \"dns-default-q644h\" (UID: \"e3d76e7c-c94d-4b43-82cf-98ca9dc9c019\") " pod="openshift-dns/dns-default-q644h" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.414198 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/276663de-ff96-4732-911a-fae0b469545e-plugins-dir\") pod \"csi-hostpathplugin-69rqv\" (UID: \"276663de-ff96-4732-911a-fae0b469545e\") " pod="hostpath-provisioner/csi-hostpathplugin-69rqv" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.414236 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1ef2dc43-f6e3-4a23-85bc-f23d912b2d6d-cert\") pod \"ingress-canary-5fxsd\" (UID: \"1ef2dc43-f6e3-4a23-85bc-f23d912b2d6d\") " pod="openshift-ingress-canary/ingress-canary-5fxsd" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.414264 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/276663de-ff96-4732-911a-fae0b469545e-registration-dir\") pod \"csi-hostpathplugin-69rqv\" (UID: \"276663de-ff96-4732-911a-fae0b469545e\") " pod="hostpath-provisioner/csi-hostpathplugin-69rqv" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.414314 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn6qw\" (UniqueName: \"kubernetes.io/projected/e3d76e7c-c94d-4b43-82cf-98ca9dc9c019-kube-api-access-wn6qw\") pod \"dns-default-q644h\" (UID: \"e3d76e7c-c94d-4b43-82cf-98ca9dc9c019\") " pod="openshift-dns/dns-default-q644h" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.414341 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.414383 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/276663de-ff96-4732-911a-fae0b469545e-mountpoint-dir\") pod \"csi-hostpathplugin-69rqv\" (UID: \"276663de-ff96-4732-911a-fae0b469545e\") " pod="hostpath-provisioner/csi-hostpathplugin-69rqv" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.414606 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/276663de-ff96-4732-911a-fae0b469545e-mountpoint-dir\") pod \"csi-hostpathplugin-69rqv\" (UID: \"276663de-ff96-4732-911a-fae0b469545e\") " pod="hostpath-provisioner/csi-hostpathplugin-69rqv" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.415021 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/276663de-ff96-4732-911a-fae0b469545e-csi-data-dir\") pod \"csi-hostpathplugin-69rqv\" (UID: 
\"276663de-ff96-4732-911a-fae0b469545e\") " pod="hostpath-provisioner/csi-hostpathplugin-69rqv" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.415293 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/276663de-ff96-4732-911a-fae0b469545e-plugins-dir\") pod \"csi-hostpathplugin-69rqv\" (UID: \"276663de-ff96-4732-911a-fae0b469545e\") " pod="hostpath-provisioner/csi-hostpathplugin-69rqv" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.415314 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3d76e7c-c94d-4b43-82cf-98ca9dc9c019-config-volume\") pod \"dns-default-q644h\" (UID: \"e3d76e7c-c94d-4b43-82cf-98ca9dc9c019\") " pod="openshift-dns/dns-default-q644h" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.415388 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/276663de-ff96-4732-911a-fae0b469545e-registration-dir\") pod \"csi-hostpathplugin-69rqv\" (UID: \"276663de-ff96-4732-911a-fae0b469545e\") " pod="hostpath-provisioner/csi-hostpathplugin-69rqv" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.415435 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/276663de-ff96-4732-911a-fae0b469545e-socket-dir\") pod \"csi-hostpathplugin-69rqv\" (UID: \"276663de-ff96-4732-911a-fae0b469545e\") " pod="hostpath-provisioner/csi-hostpathplugin-69rqv" Jan 27 06:48:48 crc kubenswrapper[4796]: E0127 06:48:48.415886 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:48.915863574 +0000 UTC m=+150.022830901 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.443813 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e3d76e7c-c94d-4b43-82cf-98ca9dc9c019-metrics-tls\") pod \"dns-default-q644h\" (UID: \"e3d76e7c-c94d-4b43-82cf-98ca9dc9c019\") " pod="openshift-dns/dns-default-q644h" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.445106 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkm7r\" (UniqueName: \"kubernetes.io/projected/1ef2dc43-f6e3-4a23-85bc-f23d912b2d6d-kube-api-access-xkm7r\") pod \"ingress-canary-5fxsd\" (UID: \"1ef2dc43-f6e3-4a23-85bc-f23d912b2d6d\") " pod="openshift-ingress-canary/ingress-canary-5fxsd" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.451773 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1ef2dc43-f6e3-4a23-85bc-f23d912b2d6d-cert\") pod \"ingress-canary-5fxsd\" (UID: \"1ef2dc43-f6e3-4a23-85bc-f23d912b2d6d\") " pod="openshift-ingress-canary/ingress-canary-5fxsd" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.483667 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9zhv\" (UniqueName: \"kubernetes.io/projected/276663de-ff96-4732-911a-fae0b469545e-kube-api-access-j9zhv\") pod \"csi-hostpathplugin-69rqv\" (UID: \"276663de-ff96-4732-911a-fae0b469545e\") " pod="hostpath-provisioner/csi-hostpathplugin-69rqv" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.502753 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn6qw\" (UniqueName: \"kubernetes.io/projected/e3d76e7c-c94d-4b43-82cf-98ca9dc9c019-kube-api-access-wn6qw\") pod \"dns-default-q644h\" (UID: \"e3d76e7c-c94d-4b43-82cf-98ca9dc9c019\") " pod="openshift-dns/dns-default-q644h" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.508638 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tgsj"] Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.513412 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrzb2"] Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.515020 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4ddms"] Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.515309 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:48 crc kubenswrapper[4796]: E0127 06:48:48.515790 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:49.01575464 +0000 UTC m=+150.122721967 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.533570 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6g4p8"] Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.535514 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-pxnl2"] Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.559827 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-859tc" event={"ID":"e271bec2-e43e-4236-a3d5-024b55665af9","Type":"ContainerStarted","Data":"5c881bd82e4a569698a866a5deba267faec293e5c56ca5752c7f2525cb892a70"} Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.572341 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6gn8b" event={"ID":"23a02d37-8e7f-4855-a203-a3d9865cdd3b","Type":"ContainerStarted","Data":"ecb17a12bf75642bdc60604693f69fb779daeb8d48938c8f4db95dcf8c82e20d"} Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.572403 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6gn8b" event={"ID":"23a02d37-8e7f-4855-a203-a3d9865cdd3b","Type":"ContainerStarted","Data":"92858b420a4b278a0815f2a164aa518b454d9a1fedb5f7d8ac1534e28172d60e"} Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.584355 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ts2rv" event={"ID":"bcacc8cc-dbaa-4826-9939-0af3e0cc3297","Type":"ContainerStarted","Data":"d509159ea8d1862532847f4b44f6f3895e6e9cfaaf00b9ab76376c42ea7af6d0"} Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.584422 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ts2rv" event={"ID":"bcacc8cc-dbaa-4826-9939-0af3e0cc3297","Type":"ContainerStarted","Data":"35bc0a614142193525d77c7768304bbae2b3a2468992963abd1773166d86ee84"} Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.587199 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xzjrl" event={"ID":"3f4dad69-2226-4d55-95af-7fe3b3cfaf41","Type":"ContainerStarted","Data":"17c639828a02082187d1335faabbec7c4c8ec922c6218468574d8c70873007b7"} Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.587247 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xzjrl" event={"ID":"3f4dad69-2226-4d55-95af-7fe3b3cfaf41","Type":"ContainerStarted","Data":"b55a79bc0f23b1061fbb0e5d49ec63c67a75260044e817b3f50a37ae77c34c4f"} Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.590235 4796 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qhgrn" event={"ID":"c009d452-642e-47de-994c-cc6e0af791f9","Type":"ContainerStarted","Data":"e9ee7ef0fe9e8891ad3a78a721d86719309131587f256048558d231cbe5943e1"} Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.590289 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qhgrn" event={"ID":"c009d452-642e-47de-994c-cc6e0af791f9","Type":"ContainerStarted","Data":"afc806a2c32b84b87e7c7ff21145c9d82d640a01d01d6af6a080990d60659c33"} Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.590565 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-qhgrn" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.591970 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d75v6" event={"ID":"ff492068-85f5-4775-b13f-2604432a063e","Type":"ContainerStarted","Data":"f30b3b45086f824d6987258a02e1079d74bb8f1186a80d52e62e6e4f55b9b2d6"} Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.615841 4796 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-qhgrn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/healthz\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.616389 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-qhgrn" podUID="c009d452-642e-47de-994c-cc6e0af791f9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.15:8080/healthz\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.616015 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnjwp" event={"ID":"48206a38-ab99-4560-9cf1-6a260f52c37d","Type":"ContainerStarted","Data":"e3be9a8a949c468f49a5c7706b4d4e08b0c0baf11248be39dc37755943f2c0bd"} Jan 27 06:48:48 crc kubenswrapper[4796]: E0127 06:48:48.620036 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:49.12000848 +0000 UTC m=+150.226975807 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.617525 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.636128 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-qwn64" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.646551 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rzsch" event={"ID":"677d7062-8915-454b-aff0-999ba539d454","Type":"ContainerStarted","Data":"fce9a4be391a5efa749f6ad0027d2b8ed177e2cad9b0058c6c2ab2b06de9bfd1"} Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.648876 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-t5kh8" event={"ID":"bbe422dd-b194-4dd1-a363-c9cbe72971f9","Type":"ContainerStarted","Data":"c13c8e54bd95cb5e8a7410b5f5e184a1c6f0d873743f741dd0e1ab1263a4382f"} Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.660116 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-428lj" event={"ID":"1bb7a5d9-b5b4-41b6-ac58-a33656c85de0","Type":"ContainerStarted","Data":"4d284c480f5fad2645273a62e245e43f50a3a8013a3fa00d9f985fc261bf607a"} Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.660145 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-428lj" event={"ID":"1bb7a5d9-b5b4-41b6-ac58-a33656c85de0","Type":"ContainerStarted","Data":"8e0f3ed8bd2c7e20f6421ea32ae795190bd247b0a4587a9522357db1dea6ad47"} Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.662292 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-428lj" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.664133 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-ltfqp" event={"ID":"f2e2d019-7fb7-4f75-81ee-b20a700c8f0b","Type":"ContainerStarted","Data":"e0d864399fe70fd9809f9275e0d3589889b8946b8bb492a27da9e005ed5db706"} Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.667484 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pd9lg" event={"ID":"4bb970b4-43c2-46e2-b707-20145a03a2bb","Type":"ContainerStarted","Data":"9eadf4515a9b9216e390b74f8c49039f2a6466c5185e39c5e0fb90114fdb07ef"} Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.668677 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-6qfnr" 
event={"ID":"9e475ebe-1a52-4423-bc08-5c659c92f7e8","Type":"ContainerStarted","Data":"b57ff87957058ece7c03e1eabae151b705dc9b798edb18589dbc3c3d82097920"} Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.669998 4796 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-428lj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.670069 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-428lj" podUID="1bb7a5d9-b5b4-41b6-ac58-a33656c85de0" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.671317 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nh9gk" event={"ID":"24ab0c5d-3e8a-4f6a-8060-882262c28888","Type":"ContainerStarted","Data":"64d7d16be715806b501c1cac9153c616187f7a6c1257a651f5a8eb80a3fa7741"} Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.676049 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ts729" event={"ID":"0b9441e9-83fe-42de-92a2-71efc3550ac1","Type":"ContainerStarted","Data":"d9119523bd0928487a380174b93739fcbd16d85c6f650d94a3413ba9b0dd62bf"} Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.677907 4796 generic.go:334] "Generic (PLEG): container finished" podID="a82fe90d-0a61-4da5-bfac-7c69f6a59339" containerID="1b86d76e739064e473c82205b5f5922cf3aa4e91474d2d487a593539f81deaf6" exitCode=0 Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.678475 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zprl5" event={"ID":"a82fe90d-0a61-4da5-bfac-7c69f6a59339","Type":"ContainerDied","Data":"1b86d76e739064e473c82205b5f5922cf3aa4e91474d2d487a593539f81deaf6"} Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.682665 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7hj88" event={"ID":"a08326b5-7ad1-43ef-987c-86b61eeade10","Type":"ContainerStarted","Data":"97a2501940e63b77e3b30d16fdc724f1e082fd5a8fd17c814fef82c18b851676"} Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.685946 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hs9jk" event={"ID":"7dbac4dd-abc3-4fe1-9b4c-2106cf81684e","Type":"ContainerStarted","Data":"95fe4329d73b0df764a4f4daf967fc657535a5b06ed3309b7aca6571e8f9f5f3"} Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.687278 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hs9jk" event={"ID":"7dbac4dd-abc3-4fe1-9b4c-2106cf81684e","Type":"ContainerStarted","Data":"261c8bb9d20e9d36f8e4312cddae9ce6f9e3fea8b509721a010954374bbc543f"} Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.692407 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-69rqv" Jan 27 06:48:48 crc kubenswrapper[4796]: W0127 06:48:48.697688 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda982bd3f_5096_4ddc_9a62_9cec039757e1.slice/crio-1a63b0f9afb705e209dd92abeacb1be2943c5404c325dab4530f415bd490c6a1 WatchSource:0}: Error finding container 1a63b0f9afb705e209dd92abeacb1be2943c5404c325dab4530f415bd490c6a1: Status 404 returned error can't find the container with id 1a63b0f9afb705e209dd92abeacb1be2943c5404c325dab4530f415bd490c6a1 Jan 27 06:48:48 crc kubenswrapper[4796]: W0127 06:48:48.698651 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2b48287_9900_438d_b356_0859859a90e8.slice/crio-8fbb835defd930ac1af5a517a7429d941461a46025aed256281939db0530ebef WatchSource:0}: Error finding container 8fbb835defd930ac1af5a517a7429d941461a46025aed256281939db0530ebef: Status 404 returned error can't find the container with id 8fbb835defd930ac1af5a517a7429d941461a46025aed256281939db0530ebef Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.701603 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-q644h" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.708397 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5fxsd" Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.731167 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:48 crc kubenswrapper[4796]: E0127 06:48:48.733170 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:49.23314268 +0000 UTC m=+150.340110047 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:48 crc kubenswrapper[4796]: W0127 06:48:48.817705 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b035abe_aba2_464c_bf93_43ca7da14869.slice/crio-b7c63a7f2d50ae5dec908a905acbf464cbbf0e755be4d548067326f58dab466a WatchSource:0}: Error finding container b7c63a7f2d50ae5dec908a905acbf464cbbf0e755be4d548067326f58dab466a: Status 404 returned error can't find the container with id b7c63a7f2d50ae5dec908a905acbf464cbbf0e755be4d548067326f58dab466a Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.848181 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:48 crc kubenswrapper[4796]: E0127 06:48:48.848644 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:49.348630784 +0000 UTC m=+150.455598111 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:48 crc kubenswrapper[4796]: I0127 06:48:48.949467 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:48 crc kubenswrapper[4796]: E0127 06:48:48.949850 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:49.449833374 +0000 UTC m=+150.556800701 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:49 crc kubenswrapper[4796]: I0127 06:48:49.051620 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:49 crc kubenswrapper[4796]: E0127 06:48:49.051986 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:49.551969168 +0000 UTC m=+150.658936495 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:49 crc kubenswrapper[4796]: I0127 06:48:49.153761 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:49 crc kubenswrapper[4796]: E0127 06:48:49.154333 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:49.654314808 +0000 UTC m=+150.761282135 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:49 crc kubenswrapper[4796]: I0127 06:48:49.261330 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:49 crc kubenswrapper[4796]: E0127 06:48:49.263440 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:49.763422505 +0000 UTC m=+150.870389852 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:49 crc kubenswrapper[4796]: I0127 06:48:49.363212 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:49 crc kubenswrapper[4796]: E0127 06:48:49.363684 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:49.86365898 +0000 UTC m=+150.970626307 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:49 crc kubenswrapper[4796]: I0127 06:48:49.472893 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491605-kd67n"] Jan 27 06:48:49 crc kubenswrapper[4796]: I0127 06:48:49.475669 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:49 crc kubenswrapper[4796]: E0127 06:48:49.476708 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:49.976694189 +0000 UTC m=+151.083661516 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:49 crc kubenswrapper[4796]: I0127 06:48:49.570931 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sk2nz"] Jan 27 06:48:49 crc kubenswrapper[4796]: I0127 06:48:49.582912 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:49 crc kubenswrapper[4796]: E0127 06:48:49.583315 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:50.08329937 +0000 UTC m=+151.190266697 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:49 crc kubenswrapper[4796]: I0127 06:48:49.587621 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-bxmhd"] Jan 27 06:48:49 crc kubenswrapper[4796]: I0127 06:48:49.587702 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-r6xbk"] Jan 27 06:48:49 crc kubenswrapper[4796]: I0127 06:48:49.627942 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mr9hz"] Jan 27 06:48:49 crc kubenswrapper[4796]: I0127 06:48:49.637527 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-m8qq4"] Jan 27 06:48:49 crc kubenswrapper[4796]: I0127 06:48:49.653060 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lf8gr"] Jan 27 06:48:49 crc kubenswrapper[4796]: I0127 06:48:49.668613 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-nwtjc"] Jan 27 06:48:49 crc kubenswrapper[4796]: I0127 06:48:49.670590 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bzdxk"] Jan 27 06:48:49 crc kubenswrapper[4796]: I0127 06:48:49.684710 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:49 crc kubenswrapper[4796]: I0127 06:48:49.703834 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-2ggnl"] Jan 27 06:48:49 crc kubenswrapper[4796]: E0127 06:48:49.704967 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:50.204945884 +0000 UTC m=+151.311913211 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:49 crc kubenswrapper[4796]: I0127 06:48:49.791658 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:49 crc kubenswrapper[4796]: E0127 06:48:49.796968 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:50.296923322 +0000 UTC m=+151.403890649 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:49 crc kubenswrapper[4796]: I0127 06:48:49.803785 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9t47f"] Jan 27 06:48:49 crc kubenswrapper[4796]: I0127 06:48:49.809744 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7hj88" event={"ID":"a08326b5-7ad1-43ef-987c-86b61eeade10","Type":"ContainerStarted","Data":"e26c2bb57e6b081789fa7fa452d58187cdd1d7cf6ac7cf5ba89b476f637db15a"} Jan 27 06:48:49 crc kubenswrapper[4796]: I0127 06:48:49.884361 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-428lj" podStartSLOduration=122.88419954 podStartE2EDuration="2m2.88419954s" podCreationTimestamp="2026-01-27 06:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:49.883610814 +0000 UTC m=+150.990578141" watchObservedRunningTime="2026-01-27 06:48:49.88419954 +0000 UTC m=+150.991166867" Jan 27 06:48:49 crc kubenswrapper[4796]: I0127 06:48:49.898972 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:49 crc kubenswrapper[4796]: I0127 06:48:49.899785 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-pkjfq"] Jan 27 06:48:49 crc kubenswrapper[4796]: 
E0127 06:48:49.901004 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:50.400847523 +0000 UTC m=+151.507814850 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:49 crc kubenswrapper[4796]: I0127 06:48:49.942691 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnjwp" event={"ID":"48206a38-ab99-4560-9cf1-6a260f52c37d","Type":"ContainerStarted","Data":"ae3ebfebe42a6d065df3d1672dfef5eab3a56e66e155babc87596aad97e14396"} Jan 27 06:48:49 crc kubenswrapper[4796]: I0127 06:48:49.973117 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrzb2" event={"ID":"55c59494-3868-4467-a864-52b1b3385b5e","Type":"ContainerStarted","Data":"28292d801688809f39b322231d978b754ae96266d313e046aaf6147c67f78837"} Jan 27 06:48:49 crc kubenswrapper[4796]: I0127 06:48:49.988378 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-6gn8b" podStartSLOduration=122.988360707 podStartE2EDuration="2m2.988360707s" podCreationTimestamp="2026-01-27 06:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:49.952275715 +0000 UTC m=+151.059243042" watchObservedRunningTime="2026-01-27 06:48:49.988360707 +0000 UTC m=+151.095328024" Jan 27 06:48:49 crc kubenswrapper[4796]: I0127 06:48:49.989407 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hs9jk" podStartSLOduration=122.989401654 podStartE2EDuration="2m2.989401654s" podCreationTimestamp="2026-01-27 06:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:49.9892415 +0000 UTC m=+151.096208827" watchObservedRunningTime="2026-01-27 06:48:49.989401654 +0000 UTC m=+151.096368981" Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:49.999960 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:50 crc kubenswrapper[4796]: E0127 06:48:50.001064 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:50.501031317 +0000 UTC m=+151.607998644 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.023049 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-qhgrn" podStartSLOduration=123.023026331 podStartE2EDuration="2m3.023026331s" podCreationTimestamp="2026-01-27 06:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:50.022306752 +0000 UTC m=+151.129274089" watchObservedRunningTime="2026-01-27 06:48:50.023026331 +0000 UTC m=+151.129993658" Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.045824 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-t5kh8" event={"ID":"bbe422dd-b194-4dd1-a363-c9cbe72971f9","Type":"ContainerStarted","Data":"08ee657def6bdd83f92338a86ae6170d14eaf7a02e0ec165e0bde9ea87ca7f06"} Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.074193 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-859tc" podStartSLOduration=124.074167335 podStartE2EDuration="2m4.074167335s" podCreationTimestamp="2026-01-27 06:46:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:50.064839692 +0000 UTC m=+151.171807009" watchObservedRunningTime="2026-01-27 06:48:50.074167335 +0000 UTC m=+151.181134662" Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.075995 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-ltfqp" event={"ID":"f2e2d019-7fb7-4f75-81ee-b20a700c8f0b","Type":"ContainerStarted","Data":"96523432b2c29342699d3a3589972b8d9fd8d110b52ebb400ef0f3bc70b75e4e"} Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.076891 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-ltfqp" Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.090220 4796 patch_prober.go:28] interesting pod/downloads-7954f5f757-ltfqp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.090307 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ltfqp" podUID="f2e2d019-7fb7-4f75-81ee-b20a700c8f0b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.105043 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: 
\"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:50 crc kubenswrapper[4796]: E0127 06:48:50.105468 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:50.605451342 +0000 UTC m=+151.712418669 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.111593 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xzjrl" podStartSLOduration=123.111569421 podStartE2EDuration="2m3.111569421s" podCreationTimestamp="2026-01-27 06:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:50.107860144 +0000 UTC m=+151.214827471" watchObservedRunningTime="2026-01-27 06:48:50.111569421 +0000 UTC m=+151.218536748" Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.114868 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zprl5" event={"ID":"a82fe90d-0a61-4da5-bfac-7c69f6a59339","Type":"ContainerStarted","Data":"d72a9080bb97ceb51e8b3cdad29377dbcbb9d7a4fc8539e359c9d0165047bae2"} Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.117109 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-46nhj"] Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.117194 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zprl5" Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.138878 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pd9lg" event={"ID":"4bb970b4-43c2-46e2-b707-20145a03a2bb","Type":"ContainerStarted","Data":"8b654fbaa1a444c614205ed174ee20d5217171f89fd6544772dda5854539d022"} Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.140057 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pxnl2" event={"ID":"3b10ebe2-f762-4ac3-a59a-76aa1256bdf7","Type":"ContainerStarted","Data":"12fbb0f3aec8b76a356b26c83816deb9b186f2a826f55178927ee41f65a29eeb"} Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.140658 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qwn64" event={"ID":"663b211e-0671-47e8-ae93-5c177e7a21eb","Type":"ContainerStarted","Data":"6af4d8e5476a16f5ab03a75c8e15ccadc9a8c9b886583b6626b7495e96a257e4"} Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.146110 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7hj88" podStartSLOduration=124.146086292 
podStartE2EDuration="2m4.146086292s" podCreationTimestamp="2026-01-27 06:46:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:50.129397646 +0000 UTC m=+151.236364963" watchObservedRunningTime="2026-01-27 06:48:50.146086292 +0000 UTC m=+151.253053609" Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.146795 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-q644h"] Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.152111 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ts729" event={"ID":"0b9441e9-83fe-42de-92a2-71efc3550ac1","Type":"ContainerStarted","Data":"292f8221c5a3e796b33a2df2a678b4403f7437179e02b8283e13880ba9d118ae"} Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.152997 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-69rqv"] Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.159757 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ts729" Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.167915 4796 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-ts729 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.167977 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ts729" podUID="0b9441e9-83fe-42de-92a2-71efc3550ac1" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.199067 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tnjwp" podStartSLOduration=123.199037252 podStartE2EDuration="2m3.199037252s" podCreationTimestamp="2026-01-27 06:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:50.159120722 +0000 UTC m=+151.266088069" watchObservedRunningTime="2026-01-27 06:48:50.199037252 +0000 UTC m=+151.306004579" Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.205140 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ts2rv" event={"ID":"bcacc8cc-dbaa-4826-9939-0af3e0cc3297","Type":"ContainerStarted","Data":"5459bf25781d9bda1992839a7ba5cabd66927bd1fe458eaeb0883f85d17c0482"} Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.206184 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.212202 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-pd9lg" podStartSLOduration=123.212182476 podStartE2EDuration="2m3.212182476s" podCreationTimestamp="2026-01-27 06:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:50.208312575 +0000 UTC m=+151.315279902" watchObservedRunningTime="2026-01-27 06:48:50.212182476 +0000 UTC m=+151.319149803" Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.212578 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-ltfqp" podStartSLOduration=123.212574036 podStartE2EDuration="2m3.212574036s" podCreationTimestamp="2026-01-27 06:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:50.192112502 +0000 UTC m=+151.299079829" watchObservedRunningTime="2026-01-27 06:48:50.212574036 +0000 UTC m=+151.319541353" Jan 27 06:48:50 crc kubenswrapper[4796]: E0127 06:48:50.212654 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:50.712628937 +0000 UTC m=+151.819596254 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.216370 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-r49vt" event={"ID":"1b035abe-aba2-464c-bf93-43ca7da14869","Type":"ContainerStarted","Data":"b7c63a7f2d50ae5dec908a905acbf464cbbf0e755be4d548067326f58dab466a"} Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.220425 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tgsj" event={"ID":"a982bd3f-5096-4ddc-9a62-9cec039757e1","Type":"ContainerStarted","Data":"1a63b0f9afb705e209dd92abeacb1be2943c5404c325dab4530f415bd490c6a1"} Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.221749 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6g4p8" event={"ID":"5d1e76c1-ba8e-41b1-a312-0670b06bc59f","Type":"ContainerStarted","Data":"7cb405e9b806e5a1c962a08c68d9b201f9d374cfc9116557053eb0c4249e1fce"} Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.238584 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zprl5" podStartSLOduration=123.238571055 podStartE2EDuration="2m3.238571055s" podCreationTimestamp="2026-01-27 06:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:50.237711652 +0000 UTC m=+151.344678979" watchObservedRunningTime="2026-01-27 06:48:50.238571055 +0000 UTC m=+151.345538382" Jan 27 06:48:50 crc 
kubenswrapper[4796]: I0127 06:48:50.245304 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nh9gk" event={"ID":"24ab0c5d-3e8a-4f6a-8060-882262c28888","Type":"ContainerStarted","Data":"276f1a77e9dee5168ca4076ff323f4f16c8cf94a4b0ce89854e81772ae61b8e2"} Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.290897 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d75v6" event={"ID":"ff492068-85f5-4775-b13f-2604432a063e","Type":"ContainerStarted","Data":"de041a002c8ef8ad4f63376393bb62f4cf661bf432c7c4f8cd21a15fd85df949"} Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.293616 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d75v6" Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.307725 4796 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-d75v6 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.307807 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d75v6" podUID="ff492068-85f5-4775-b13f-2604432a063e" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.319895 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:50 crc kubenswrapper[4796]: E0127 06:48:50.320230 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:50.820218334 +0000 UTC m=+151.927185661 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.326354 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ts729" podStartSLOduration=123.326331693 podStartE2EDuration="2m3.326331693s" podCreationTimestamp="2026-01-27 06:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:50.290321755 +0000 UTC m=+151.397289082" watchObservedRunningTime="2026-01-27 06:48:50.326331693 +0000 UTC m=+151.433299020" Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.328130 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rzsch" event={"ID":"677d7062-8915-454b-aff0-999ba539d454","Type":"ContainerStarted","Data":"6c5d9aeb3c2bb39868069b2db82303e7d1130edb8582c26c8830ce985db7a972"} Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.329014 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rzsch" Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.341902 4796 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-rzsch container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused" start-of-body= Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.341959 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rzsch" podUID="677d7062-8915-454b-aff0-999ba539d454" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused" Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.349808 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-r6xbk" event={"ID":"00061f00-b799-407e-8b71-30de57b92847","Type":"ContainerStarted","Data":"0af13251b54d8f4efa711f3cd8e48ce4e4e7241b1458f77f0566d4092e99d0f3"} Jan 27 06:48:50 crc kubenswrapper[4796]: W0127 06:48:50.359861 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3d76e7c_c94d_4b43_82cf_98ca9dc9c019.slice/crio-7da1b9a75465cbb45fa43bad24169532db9500587bbde7acd97a5d2f2f37e1f6 WatchSource:0}: Error finding container 7da1b9a75465cbb45fa43bad24169532db9500587bbde7acd97a5d2f2f37e1f6: Status 404 returned error can't find the container with id 7da1b9a75465cbb45fa43bad24169532db9500587bbde7acd97a5d2f2f37e1f6 Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.389030 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ts2rv" podStartSLOduration=123.389012039 podStartE2EDuration="2m3.389012039s" podCreationTimestamp="2026-01-27 06:46:47 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:50.331839937 +0000 UTC m=+151.438807264" watchObservedRunningTime="2026-01-27 06:48:50.389012039 +0000 UTC m=+151.495979366" Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.390475 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rzsch" podStartSLOduration=123.390462667 podStartE2EDuration="2m3.390462667s" podCreationTimestamp="2026-01-27 06:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:50.38792658 +0000 UTC m=+151.494893907" watchObservedRunningTime="2026-01-27 06:48:50.390462667 +0000 UTC m=+151.497429994" Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.392176 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-4ddms" event={"ID":"d2b48287-9900-438d-b356-0859859a90e8","Type":"ContainerStarted","Data":"8fbb835defd930ac1af5a517a7429d941461a46025aed256281939db0530ebef"} Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.392253 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-4ddms" Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.426636 4796 patch_prober.go:28] interesting pod/console-operator-58897d9998-4ddms container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/readyz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.426701 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-4ddms" podUID="d2b48287-9900-438d-b356-0859859a90e8" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/readyz\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.428443 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.429505 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-qhgrn" Jan 27 06:48:50 crc kubenswrapper[4796]: E0127 06:48:50.430686 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:50.930638255 +0000 UTC m=+152.037605592 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.446745 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-428lj" Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.463083 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d75v6" podStartSLOduration=123.46305538 podStartE2EDuration="2m3.46305538s" podCreationTimestamp="2026-01-27 06:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:50.437238197 +0000 UTC m=+151.544205524" watchObservedRunningTime="2026-01-27 06:48:50.46305538 +0000 UTC m=+151.570022707" Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.478587 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-4ddms" podStartSLOduration=123.478563135 podStartE2EDuration="2m3.478563135s" podCreationTimestamp="2026-01-27 06:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:50.474634162 +0000 UTC m=+151.581601489" watchObservedRunningTime="2026-01-27 06:48:50.478563135 +0000 UTC m=+151.585530462" Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.535421 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:50 crc kubenswrapper[4796]: E0127 06:48:50.547756 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:51.047741889 +0000 UTC m=+152.154709216 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.636383 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:50 crc kubenswrapper[4796]: E0127 06:48:50.639646 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:51.139622787 +0000 UTC m=+152.246590114 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.643475 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:50 crc kubenswrapper[4796]: E0127 06:48:50.644160 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:51.144147734 +0000 UTC m=+152.251115061 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.645079 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5fxsd"] Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.760136 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:50 crc kubenswrapper[4796]: E0127 06:48:50.760898 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:51.26087962 +0000 UTC m=+152.367846947 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.863188 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:50 crc kubenswrapper[4796]: E0127 06:48:50.863708 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:51.363689242 +0000 UTC m=+152.470656569 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:50 crc kubenswrapper[4796]: I0127 06:48:50.963766 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:50 crc kubenswrapper[4796]: E0127 06:48:50.964481 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:51.464457931 +0000 UTC m=+152.571425258 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.066817 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:51 crc kubenswrapper[4796]: E0127 06:48:51.067211 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:51.567196181 +0000 UTC m=+152.674163508 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.181954 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:51 crc kubenswrapper[4796]: E0127 06:48:51.188489 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:51.688461405 +0000 UTC m=+152.795428732 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.290148 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:51 crc kubenswrapper[4796]: E0127 06:48:51.290758 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:51.790744113 +0000 UTC m=+152.897711440 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.395192 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:51 crc kubenswrapper[4796]: E0127 06:48:51.395637 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:51.895600498 +0000 UTC m=+153.002567815 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.445120 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tgsj" event={"ID":"a982bd3f-5096-4ddc-9a62-9cec039757e1","Type":"ContainerStarted","Data":"8c1be94feac111dcb7546531021d0da11790c18517e355700004fa1e5dfa6fb2"} Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.459708 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-q644h" event={"ID":"e3d76e7c-c94d-4b43-82cf-98ca9dc9c019","Type":"ContainerStarted","Data":"7da1b9a75465cbb45fa43bad24169532db9500587bbde7acd97a5d2f2f37e1f6"} Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.491026 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-6qfnr" event={"ID":"9e475ebe-1a52-4423-bc08-5c659c92f7e8","Type":"ContainerStarted","Data":"0c77c661efc03849ac5e9565d678b476dfc55ce70a19ab85be083395340b82ea"} Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.502879 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:51 crc kubenswrapper[4796]: E0127 06:48:51.504268 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:52.004254493 +0000 UTC m=+153.111221820 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.527221 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tgsj" podStartSLOduration=124.527191501 podStartE2EDuration="2m4.527191501s" podCreationTimestamp="2026-01-27 06:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:51.490167356 +0000 UTC m=+152.597134703" watchObservedRunningTime="2026-01-27 06:48:51.527191501 +0000 UTC m=+152.634158828" Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.543435 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nwtjc" event={"ID":"753527d9-7500-4526-be72-3306582c5f7d","Type":"ContainerStarted","Data":"3d288b4bea60520d1eca47f94c837387dc88d2489d2cbb9a1121010aea94fdcb"} Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.543503 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-nwtjc" event={"ID":"753527d9-7500-4526-be72-3306582c5f7d","Type":"ContainerStarted","Data":"d26d4e5f19742eda4adf1b1f1f7231a286dc6f01b7fff70bd498bbbc52cd2da0"} Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.562439 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-4ddms" event={"ID":"d2b48287-9900-438d-b356-0859859a90e8","Type":"ContainerStarted","Data":"52ae3fe448f772deecfe0d8193b9f2bf1ff3ef90c035715e9aecd9ffe7fbf628"} Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.565375 4796 patch_prober.go:28] interesting pod/console-operator-58897d9998-4ddms container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/readyz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.565437 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-4ddms" podUID="d2b48287-9900-438d-b356-0859859a90e8" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/readyz\": dial tcp 10.217.0.24:8443: connect: connection refused" Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.570155 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-6qfnr" podStartSLOduration=124.570116262 podStartE2EDuration="2m4.570116262s" podCreationTimestamp="2026-01-27 06:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:51.529194293 +0000 UTC m=+152.636161620" watchObservedRunningTime="2026-01-27 06:48:51.570116262 +0000 UTC m=+152.677083579" Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.576320 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-nwtjc" podStartSLOduration=124.576293262 podStartE2EDuration="2m4.576293262s" podCreationTimestamp="2026-01-27 06:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:51.569943737 +0000 UTC m=+152.676911064" watchObservedRunningTime="2026-01-27 06:48:51.576293262 +0000 UTC m=+152.683260579" Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.582558 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6g4p8" event={"ID":"5d1e76c1-ba8e-41b1-a312-0670b06bc59f","Type":"ContainerStarted","Data":"6aabbfbdaf4820d7b7b37a29a974d63d7235caa7c9d3925fabd45d29fe84d3d8"} Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.603875 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:51 crc kubenswrapper[4796]: E0127 06:48:51.604087 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:52.104061607 +0000 UTC m=+153.211028934 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.604341 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:51 crc kubenswrapper[4796]: E0127 06:48:51.604820 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:52.104801506 +0000 UTC m=+153.211768823 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.612910 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sk2nz" event={"ID":"884fd52c-4588-455a-a5b7-b333dc17aa3c","Type":"ContainerStarted","Data":"88210db418ff8b192119e4842cf465d89a2877be8b6dd0ff77b5be827e989a08"} Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.612973 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sk2nz" event={"ID":"884fd52c-4588-455a-a5b7-b333dc17aa3c","Type":"ContainerStarted","Data":"8f7811d8f2923f6251cc5c166e5bf3324714f97f688f5dc616ab82b160e1fa41"} Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.613826 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sk2nz" Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.622023 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrzb2" event={"ID":"55c59494-3868-4467-a864-52b1b3385b5e","Type":"ContainerStarted","Data":"23f9121075d09ee714e07beb08d4e8382b435920f1a959bacd676e6ddf247ef9"} Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.654359 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" event={"ID":"56d7f37b-05cc-4a36-b844-423465e79e8e","Type":"ContainerStarted","Data":"2bb5a8eef1d0564aad5a180044597f0d9cee7b3fc08c00fe7a8fb7259f634e14"} Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.655618 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.662431 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-kd67n" event={"ID":"98eaa352-8307-40bd-b8a3-16f2e3088fa4","Type":"ContainerStarted","Data":"f1e16f192d84820afb841d561acdb5fada7821549f4f91e67a81f36670e7e3c5"} Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.662490 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-kd67n" event={"ID":"98eaa352-8307-40bd-b8a3-16f2e3088fa4","Type":"ContainerStarted","Data":"61f04f2e352b169d1dde701943e15dff131c54afc02a557fb203f8e0f238a58c"} Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.665880 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sk2nz" podStartSLOduration=124.665851709 podStartE2EDuration="2m4.665851709s" podCreationTimestamp="2026-01-27 06:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:51.656039423 +0000 UTC m=+152.763006750" watchObservedRunningTime="2026-01-27 06:48:51.665851709 +0000 UTC 
m=+152.772819036" Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.677370 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-2ggnl" event={"ID":"ee6f1f88-f21e-4b9b-ab68-79d68bd37a7f","Type":"ContainerStarted","Data":"485f9bb31829b99120a7b54428e8649240fece17baf9f79106d482793bcf0627"} Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.677868 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-2ggnl" event={"ID":"ee6f1f88-f21e-4b9b-ab68-79d68bd37a7f","Type":"ContainerStarted","Data":"504f0e14c93971e3544082ad867fd82c331b30951313571b6499aaabbe037090"} Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.680495 4796 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-bxmhd container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.40:6443/healthz\": dial tcp 10.217.0.40:6443: connect: connection refused" start-of-body= Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.680567 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" podUID="56d7f37b-05cc-4a36-b844-423465e79e8e" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.40:6443/healthz\": dial tcp 10.217.0.40:6443: connect: connection refused" Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.682826 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" event={"ID":"bf5d3de7-2818-4aa8-9c56-81244d431713","Type":"ContainerStarted","Data":"36f864689d1b9c1506aebe3588b108f6880b069303b0df3a84e75ed5eca17a99"} Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.685121 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lf8gr" event={"ID":"e05300a3-a8c8-495f-a4fd-79f326ce0d73","Type":"ContainerStarted","Data":"3e8c6223b12f298f44814054abcc87e02134a8dbe07656a1b5f4ef54e8d0ba08"} Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.685160 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lf8gr" event={"ID":"e05300a3-a8c8-495f-a4fd-79f326ce0d73","Type":"ContainerStarted","Data":"403c3bb24e4296b573e75db8342bad121385d9a09743c19eee2ffbcab1e71644"} Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.688160 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-46nhj" event={"ID":"68d73a51-598c-41e8-9064-3942bd4f93df","Type":"ContainerStarted","Data":"54e8198b639833ce28336d6e0e354dc5d167b06aef8a42e36a4bde70a0cc3977"} Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.688202 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-46nhj" event={"ID":"68d73a51-598c-41e8-9064-3942bd4f93df","Type":"ContainerStarted","Data":"2e7f9cebd2379f38cea9b657235ec61e4fdf71bedc6c4df6654524fc083471b8"} Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.697668 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" podStartSLOduration=125.697636258 podStartE2EDuration="2m5.697636258s" podCreationTimestamp="2026-01-27 06:46:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:51.686891108 
+0000 UTC m=+152.793858435" watchObservedRunningTime="2026-01-27 06:48:51.697636258 +0000 UTC m=+152.804603585" Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.708175 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:51 crc kubenswrapper[4796]: E0127 06:48:51.709918 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:52.209898568 +0000 UTC m=+153.316865895 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.730148 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrzb2" podStartSLOduration=124.730115325 podStartE2EDuration="2m4.730115325s" podCreationTimestamp="2026-01-27 06:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:51.718269186 +0000 UTC m=+152.825236513" watchObservedRunningTime="2026-01-27 06:48:51.730115325 +0000 UTC m=+152.837082652" Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.737385 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bzdxk" event={"ID":"34e19426-cb00-4e09-9933-57a015735c77","Type":"ContainerStarted","Data":"f68c59d1a6de77b2cd96cc3f4145aa53454e63f927ea8ad0b8d233b1fa977927"} Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.737425 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bzdxk" event={"ID":"34e19426-cb00-4e09-9933-57a015735c77","Type":"ContainerStarted","Data":"9e4ff42add389b6cf6629701e51af9613753756dc8d43307d088f6da631a4b09"} Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.740563 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9t47f" event={"ID":"73db7fe9-d1e0-44f1-88fa-a186dec7a4b0","Type":"ContainerStarted","Data":"c32b7f8d4dba4deb603f64a2836b9e5a0bd28dede442f2544019143474cb291e"} Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.740622 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9t47f" event={"ID":"73db7fe9-d1e0-44f1-88fa-a186dec7a4b0","Type":"ContainerStarted","Data":"b0a26f36512a6503a05344f9d7a0e75974b8ae7ec3fefa2c0936ffaca059eea1"} Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.745658 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nh9gk" event={"ID":"24ab0c5d-3e8a-4f6a-8060-882262c28888","Type":"ContainerStarted","Data":"f18bb25028b3c4c1ab94c01465e35e44bb102c1f7eabd113ac33e29fd18e6f39"} Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.764890 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-r6xbk" event={"ID":"00061f00-b799-407e-8b71-30de57b92847","Type":"ContainerStarted","Data":"c68d4654c401b0603b8a27cbfa9e7307fcaec65a2d4edd786a6e1de6950aedcd"} Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.844717 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.854698 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-kd67n" podStartSLOduration=124.854667455 podStartE2EDuration="2m4.854667455s" podCreationTimestamp="2026-01-27 06:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:51.775492369 +0000 UTC m=+152.882459696" watchObservedRunningTime="2026-01-27 06:48:51.854667455 +0000 UTC m=+152.961634782" Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.855461 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-2ggnl" podStartSLOduration=124.855454255 podStartE2EDuration="2m4.855454255s" podCreationTimestamp="2026-01-27 06:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:51.811260002 +0000 UTC m=+152.918227329" watchObservedRunningTime="2026-01-27 06:48:51.855454255 +0000 UTC m=+152.962421582" Jan 27 06:48:51 crc kubenswrapper[4796]: E0127 06:48:51.859157 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:52.359137481 +0000 UTC m=+153.466104808 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.894796 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mr9hz" event={"ID":"192cd808-feeb-4944-a1b3-99109ea0928e","Type":"ContainerStarted","Data":"176dde3c74f1699135fd328d131c9357dc621deb742c350cfb8dcafb7a9d203b"} Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.895136 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mr9hz" event={"ID":"192cd808-feeb-4944-a1b3-99109ea0928e","Type":"ContainerStarted","Data":"937e4ed24324dbc3cc1a505a6d4d1807e484040298d16965ea6a0148862817fd"} Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.908431 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-mr9hz" Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.911381 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lf8gr" podStartSLOduration=124.911344503 podStartE2EDuration="2m4.911344503s" podCreationTimestamp="2026-01-27 06:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:51.879426021 +0000 UTC m=+152.986393358" watchObservedRunningTime="2026-01-27 06:48:51.911344503 +0000 UTC m=+153.018311820" Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.932169 4796 generic.go:334] "Generic (PLEG): container finished" podID="3b10ebe2-f762-4ac3-a59a-76aa1256bdf7" containerID="b8b4f157837fb1489035320746bb3548b4e5f8be6b9c41b31686d1e7c8cf0b22" exitCode=0 Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.932333 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pxnl2" event={"ID":"3b10ebe2-f762-4ac3-a59a-76aa1256bdf7","Type":"ContainerDied","Data":"b8b4f157837fb1489035320746bb3548b4e5f8be6b9c41b31686d1e7c8cf0b22"} Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.943761 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-69rqv" event={"ID":"276663de-ff96-4732-911a-fae0b469545e","Type":"ContainerStarted","Data":"d6a5615e0eb24545e4b12dd85cf02a5cb56829135cfebb566fe50ce22a948249"} Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.991961 4796 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-mr9hz container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.992013 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-mr9hz" podUID="192cd808-feeb-4944-a1b3-99109ea0928e" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 
10.217.0.27:8443: connect: connection refused" Jan 27 06:48:51 crc kubenswrapper[4796]: I0127 06:48:51.993121 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:51 crc kubenswrapper[4796]: E0127 06:48:51.995143 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:52.495112598 +0000 UTC m=+153.602079925 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.000186 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-r49vt" event={"ID":"1b035abe-aba2-464c-bf93-43ca7da14869","Type":"ContainerStarted","Data":"8c2bb91e7c59c828ace2175aab9ecd60099f4ffebb4b683ee6e81bba4e661e17"} Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.011332 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5fxsd" event={"ID":"1ef2dc43-f6e3-4a23-85bc-f23d912b2d6d","Type":"ContainerStarted","Data":"e76fa9601f1fb9c1c670177b925b76403fee1e3003d275d6b26600ff01dc5e4d"} Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.041980 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-t5kh8" event={"ID":"bbe422dd-b194-4dd1-a363-c9cbe72971f9","Type":"ContainerStarted","Data":"0753328d09ffa114738d76084fa1d34dfc587c2e380bc3a33c2be6ac6f120a54"} Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.045797 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-mr9hz" podStartSLOduration=125.04576921 podStartE2EDuration="2m5.04576921s" podCreationTimestamp="2026-01-27 06:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:51.991630468 +0000 UTC m=+153.098597795" watchObservedRunningTime="2026-01-27 06:48:52.04576921 +0000 UTC m=+153.152736537" Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.058474 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qwn64" event={"ID":"663b211e-0671-47e8-ae93-5c177e7a21eb","Type":"ContainerStarted","Data":"fbe82cfa2233c84ded242de33c9421c435ba18f07fe942f923d90e3e16573223"} Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.073650 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-r6xbk" podStartSLOduration=125.073626286 podStartE2EDuration="2m5.073626286s" podCreationTimestamp="2026-01-27 06:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:52.027952745 +0000 UTC m=+153.134920072" watchObservedRunningTime="2026-01-27 06:48:52.073626286 +0000 UTC m=+153.180593613" Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.074402 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pkjfq" event={"ID":"28c00736-23cd-46f9-b656-94a11f71a470","Type":"ContainerStarted","Data":"a5b3b8b35993af3b1ee52f2d56c2838380e2b4d040de01f3acfbf9024f87174e"} Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.074903 4796 patch_prober.go:28] interesting pod/downloads-7954f5f757-ltfqp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.074972 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ltfqp" podUID="f2e2d019-7fb7-4f75-81ee-b20a700c8f0b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.085431 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ts729" Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.097659 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:52 crc kubenswrapper[4796]: E0127 06:48:52.099814 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:52.59979964 +0000 UTC m=+153.706766967 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.103260 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rzsch" Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.111299 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-d75v6" Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.114203 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zprl5" Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.160925 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9t47f" podStartSLOduration=125.160902804 podStartE2EDuration="2m5.160902804s" podCreationTimestamp="2026-01-27 06:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:52.074263904 +0000 UTC m=+153.181231221" watchObservedRunningTime="2026-01-27 06:48:52.160902804 +0000 UTC m=+153.267870131" Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.196951 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-5fxsd" podStartSLOduration=7.1969317329999996 podStartE2EDuration="7.196931733s" podCreationTimestamp="2026-01-27 06:48:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:52.193375781 +0000 UTC m=+153.300343108" watchObservedRunningTime="2026-01-27 06:48:52.196931733 +0000 UTC m=+153.303899060" Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.199342 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nh9gk" podStartSLOduration=126.199330706 podStartE2EDuration="2m6.199330706s" podCreationTimestamp="2026-01-27 06:46:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:52.153576582 +0000 UTC m=+153.260543909" watchObservedRunningTime="2026-01-27 06:48:52.199330706 +0000 UTC m=+153.306298033" Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.200251 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:52 crc kubenswrapper[4796]: E0127 06:48:52.202747 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-01-27 06:48:52.702718164 +0000 UTC m=+153.809685491 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.233369 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-r49vt" podStartSLOduration=7.233351823 podStartE2EDuration="7.233351823s" podCreationTimestamp="2026-01-27 06:48:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:52.232958283 +0000 UTC m=+153.339925610" watchObservedRunningTime="2026-01-27 06:48:52.233351823 +0000 UTC m=+153.340319150" Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.303133 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:52 crc kubenswrapper[4796]: E0127 06:48:52.304134 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:52.80411718 +0000 UTC m=+153.911084507 (durationBeforeRetry 500ms). 
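Annotation: the podStartSLOduration figures above are, to a first approximation, the gap between podCreationTimestamp and observedRunningTime (image-pull time would be subtracted, but both pull timestamps are zero in these entries). A small sketch of that subtraction, using the machine-config-server-r49vt timestamps quoted in the log:

    package main

    import (
    	"fmt"
    	"time"
    )

    // mustParse handles the timestamp format the log uses, which is Go's
    // default time.Time String() output.
    func mustParse(v string) time.Time {
    	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", v)
    	if err != nil {
    		panic(err)
    	}
    	return t
    }

    func main() {
    	created := mustParse("2026-01-27 06:48:45 +0000 UTC")
    	running := mustParse("2026-01-27 06:48:52.232958283 +0000 UTC")
    	fmt.Println(running.Sub(created)) // ~7.23s, in line with podStartSLOduration=7.233351823s
    }
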
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.318765 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-t5kh8" podStartSLOduration=125.31872303 podStartE2EDuration="2m5.31872303s" podCreationTimestamp="2026-01-27 06:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:52.283653546 +0000 UTC m=+153.390620873" watchObservedRunningTime="2026-01-27 06:48:52.31872303 +0000 UTC m=+153.425690357" Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.380526 4796 csr.go:261] certificate signing request csr-9hwsx is approved, waiting to be issued Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.386051 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pkjfq" podStartSLOduration=125.386031136 podStartE2EDuration="2m5.386031136s" podCreationTimestamp="2026-01-27 06:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:52.383000147 +0000 UTC m=+153.489967474" watchObservedRunningTime="2026-01-27 06:48:52.386031136 +0000 UTC m=+153.492998463" Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.396725 4796 csr.go:257] certificate signing request csr-9hwsx is issued Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.404346 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:52 crc kubenswrapper[4796]: E0127 06:48:52.404904 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:52.904881619 +0000 UTC m=+154.011848946 (durationBeforeRetry 500ms). 
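Annotation: the csr-9hwsx lines above show the two-step lifecycle of a kubelet serving-certificate request: the CSR is approved first, and the signed certificate appears in its status slightly later ("issued"). A hedged client-go sketch for checking that state follows; the kubeconfig path is an assumption for illustration.

    package main

    import (
    	"context"
    	"fmt"

    	certv1 "k8s.io/api/certificates/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	csr, err := cs.CertificatesV1().CertificateSigningRequests().Get(context.TODO(), "csr-9hwsx", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range csr.Status.Conditions {
    		if c.Type == certv1.CertificateApproved {
    			fmt.Println("approved:", c.Reason)
    		}
    	}
    	// "issued" corresponds to the signed certificate being populated in status.
    	fmt.Println("issued:", len(csr.Status.Certificate) > 0)
    }
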
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.421597 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-qwn64" podStartSLOduration=125.421567644 podStartE2EDuration="2m5.421567644s" podCreationTimestamp="2026-01-27 06:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:52.413936324 +0000 UTC m=+153.520903651" watchObservedRunningTime="2026-01-27 06:48:52.421567644 +0000 UTC m=+153.528534971" Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.427643 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j8r59"] Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.429100 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j8r59" Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.446631 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.463129 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j8r59"] Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.506507 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae-utilities\") pod \"certified-operators-j8r59\" (UID: \"25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae\") " pod="openshift-marketplace/certified-operators-j8r59" Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.507930 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79thm\" (UniqueName: \"kubernetes.io/projected/25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae-kube-api-access-79thm\") pod \"certified-operators-j8r59\" (UID: \"25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae\") " pod="openshift-marketplace/certified-operators-j8r59" Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.507985 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.508006 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae-catalog-content\") pod \"certified-operators-j8r59\" (UID: \"25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae\") " pod="openshift-marketplace/certified-operators-j8r59" Jan 27 06:48:52 crc kubenswrapper[4796]: E0127 06:48:52.508410 4796 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:53.008394639 +0000 UTC m=+154.115361966 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.613340 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.613622 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae-utilities\") pod \"certified-operators-j8r59\" (UID: \"25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae\") " pod="openshift-marketplace/certified-operators-j8r59" Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.613674 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79thm\" (UniqueName: \"kubernetes.io/projected/25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae-kube-api-access-79thm\") pod \"certified-operators-j8r59\" (UID: \"25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae\") " pod="openshift-marketplace/certified-operators-j8r59" Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.613711 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae-catalog-content\") pod \"certified-operators-j8r59\" (UID: \"25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae\") " pod="openshift-marketplace/certified-operators-j8r59" Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.614270 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae-catalog-content\") pod \"certified-operators-j8r59\" (UID: \"25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae\") " pod="openshift-marketplace/certified-operators-j8r59" Jan 27 06:48:52 crc kubenswrapper[4796]: E0127 06:48:52.614346 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:53.114331092 +0000 UTC m=+154.221298419 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.614572 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae-utilities\") pod \"certified-operators-j8r59\" (UID: \"25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae\") " pod="openshift-marketplace/certified-operators-j8r59" Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.648653 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-qwn64" Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.666945 4796 patch_prober.go:28] interesting pod/router-default-5444994796-qwn64 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:48:52 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Jan 27 06:48:52 crc kubenswrapper[4796]: [+]process-running ok Jan 27 06:48:52 crc kubenswrapper[4796]: healthz check failed Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.667007 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qwn64" podUID="663b211e-0671-47e8-ae93-5c177e7a21eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.684164 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79thm\" (UniqueName: \"kubernetes.io/projected/25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae-kube-api-access-79thm\") pod \"certified-operators-j8r59\" (UID: \"25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae\") " pod="openshift-marketplace/certified-operators-j8r59" Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.714688 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:52 crc kubenswrapper[4796]: E0127 06:48:52.715188 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:53.215168663 +0000 UTC m=+154.322135990 (durationBeforeRetry 500ms). 
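Annotation: the router startup-probe output above (the [-]/[+] lines followed by "healthz check failed") is the usual aggregated-healthz format: each named check contributes one line and any failure turns the response into a 500, which the kubelet then reports as "HTTP probe failed with statuscode: 500". The handler below is purely an illustrative sketch in that spirit; only the check names are taken from the log.

    package main

    import (
    	"fmt"
    	"net/http"
    )

    type check struct {
    	name string
    	run  func() error
    }

    // healthz renders one [-]/[+] line per check and returns 500 if any failed.
    func healthz(checks []check) http.HandlerFunc {
    	return func(w http.ResponseWriter, r *http.Request) {
    		body, failed := "", false
    		for _, c := range checks {
    			if err := c.run(); err != nil {
    				failed = true
    				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
    			} else {
    				body += fmt.Sprintf("[+]%s ok\n", c.name)
    			}
    		}
    		if failed {
    			body += "healthz check failed\n"
    			w.WriteHeader(http.StatusInternalServerError)
    		}
    		fmt.Fprint(w, body)
    	}
    }

    func main() {
    	checks := []check{
    		{"backend-http", func() error { return fmt.Errorf("not ready") }},
    		{"has-synced", func() error { return fmt.Errorf("not ready") }},
    		{"process-running", func() error { return nil }},
    	}
    	http.HandleFunc("/healthz", healthz(checks))
    	http.ListenAndServe("127.0.0.1:1936", nil) // listen address is an assumption
    }
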
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.732856 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cv5hm"] Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.734394 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cv5hm" Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.742508 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cv5hm"] Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.754613 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.759426 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j8r59" Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.820349 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.820664 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/242ef06f-796a-4c77-810b-bde4a5fbc087-catalog-content\") pod \"community-operators-cv5hm\" (UID: \"242ef06f-796a-4c77-810b-bde4a5fbc087\") " pod="openshift-marketplace/community-operators-cv5hm" Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.820722 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g55k\" (UniqueName: \"kubernetes.io/projected/242ef06f-796a-4c77-810b-bde4a5fbc087-kube-api-access-6g55k\") pod \"community-operators-cv5hm\" (UID: \"242ef06f-796a-4c77-810b-bde4a5fbc087\") " pod="openshift-marketplace/community-operators-cv5hm" Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.820811 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/242ef06f-796a-4c77-810b-bde4a5fbc087-utilities\") pod \"community-operators-cv5hm\" (UID: \"242ef06f-796a-4c77-810b-bde4a5fbc087\") " pod="openshift-marketplace/community-operators-cv5hm" Jan 27 06:48:52 crc kubenswrapper[4796]: E0127 06:48:52.820934 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:53.320913452 +0000 UTC m=+154.427880779 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.898739 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m75r4"] Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.899903 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m75r4" Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.923528 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/242ef06f-796a-4c77-810b-bde4a5fbc087-utilities\") pod \"community-operators-cv5hm\" (UID: \"242ef06f-796a-4c77-810b-bde4a5fbc087\") " pod="openshift-marketplace/community-operators-cv5hm" Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.923647 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/242ef06f-796a-4c77-810b-bde4a5fbc087-catalog-content\") pod \"community-operators-cv5hm\" (UID: \"242ef06f-796a-4c77-810b-bde4a5fbc087\") " pod="openshift-marketplace/community-operators-cv5hm" Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.923704 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g55k\" (UniqueName: \"kubernetes.io/projected/242ef06f-796a-4c77-810b-bde4a5fbc087-kube-api-access-6g55k\") pod \"community-operators-cv5hm\" (UID: \"242ef06f-796a-4c77-810b-bde4a5fbc087\") " pod="openshift-marketplace/community-operators-cv5hm" Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.923792 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:52 crc kubenswrapper[4796]: E0127 06:48:52.924161 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:53.424142715 +0000 UTC m=+154.531110042 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.925287 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/242ef06f-796a-4c77-810b-bde4a5fbc087-catalog-content\") pod \"community-operators-cv5hm\" (UID: \"242ef06f-796a-4c77-810b-bde4a5fbc087\") " pod="openshift-marketplace/community-operators-cv5hm" Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.925683 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/242ef06f-796a-4c77-810b-bde4a5fbc087-utilities\") pod \"community-operators-cv5hm\" (UID: \"242ef06f-796a-4c77-810b-bde4a5fbc087\") " pod="openshift-marketplace/community-operators-cv5hm" Jan 27 06:48:52 crc kubenswrapper[4796]: I0127 06:48:52.971507 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g55k\" (UniqueName: \"kubernetes.io/projected/242ef06f-796a-4c77-810b-bde4a5fbc087-kube-api-access-6g55k\") pod \"community-operators-cv5hm\" (UID: \"242ef06f-796a-4c77-810b-bde4a5fbc087\") " pod="openshift-marketplace/community-operators-cv5hm" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.025823 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9rz8s"] Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.038204 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9rz8s" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.052712 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:53 crc kubenswrapper[4796]: E0127 06:48:53.053673 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:53.553642803 +0000 UTC m=+154.660610130 (durationBeforeRetry 500ms). 
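Annotation: every MountDevice and TearDown failure in this window carries the same root cause — kubevirt.io.hostpath-provisioner has not yet registered with the kubelet, so the CSI volume plugin cannot build a driver client. The node's CSINode object mirrors that registration state; a hedged client-go sketch for inspecting it follows (the node name "crc" comes from the log, the kubeconfig path is assumed). The csi-hostpathplugin-69rqv pod is itself only starting containers in this same window, per the ContainerStarted event a little further down.

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	n, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// The 500ms mount retries keep failing until the hostpath driver shows up here.
    	for _, d := range n.Spec.Drivers {
    		fmt.Println("registered CSI driver:", d.Name)
    	}
    }
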
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.053730 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92w27\" (UniqueName: \"kubernetes.io/projected/8b7db9a0-58a5-496b-826c-6e64920151b8-kube-api-access-92w27\") pod \"certified-operators-m75r4\" (UID: \"8b7db9a0-58a5-496b-826c-6e64920151b8\") " pod="openshift-marketplace/certified-operators-m75r4" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.053793 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b7db9a0-58a5-496b-826c-6e64920151b8-utilities\") pod \"certified-operators-m75r4\" (UID: \"8b7db9a0-58a5-496b-826c-6e64920151b8\") " pod="openshift-marketplace/certified-operators-m75r4" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.054065 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b7db9a0-58a5-496b-826c-6e64920151b8-catalog-content\") pod \"certified-operators-m75r4\" (UID: \"8b7db9a0-58a5-496b-826c-6e64920151b8\") " pod="openshift-marketplace/certified-operators-m75r4" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.054112 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:53 crc kubenswrapper[4796]: E0127 06:48:53.054630 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:53.554622149 +0000 UTC m=+154.661589466 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.074056 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cv5hm" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.078428 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m75r4"] Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.096160 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9rz8s"] Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.112524 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pkjfq" event={"ID":"28c00736-23cd-46f9-b656-94a11f71a470","Type":"ContainerStarted","Data":"d30d111330398a4e320d177445a1fc92b4ff13237ed84dd4d86f5e83d16f898e"} Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.112577 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pkjfq" event={"ID":"28c00736-23cd-46f9-b656-94a11f71a470","Type":"ContainerStarted","Data":"162abfc32252aa1b4710e6db9b5a8b2a08ab051c8c8a3a19ad720d7f681919b9"} Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.118012 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bzdxk" event={"ID":"34e19426-cb00-4e09-9933-57a015735c77","Type":"ContainerStarted","Data":"948261420ea20b271c8e9cfe3b43ae2e4a06dff245e0c90acbe6c600e39c9afd"} Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.140887 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" event={"ID":"56d7f37b-05cc-4a36-b844-423465e79e8e","Type":"ContainerStarted","Data":"c0a65fa7ca58d9f5303380ad48d67655e46225184fa4a2817a216c13ae6a35ac"} Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.156426 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.156686 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b7db9a0-58a5-496b-826c-6e64920151b8-catalog-content\") pod \"certified-operators-m75r4\" (UID: \"8b7db9a0-58a5-496b-826c-6e64920151b8\") " pod="openshift-marketplace/certified-operators-m75r4" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.156730 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47524\" (UniqueName: \"kubernetes.io/projected/3a16a514-cf37-4eb5-a775-6dc2573704cf-kube-api-access-47524\") pod \"community-operators-9rz8s\" (UID: \"3a16a514-cf37-4eb5-a775-6dc2573704cf\") " pod="openshift-marketplace/community-operators-9rz8s" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.156800 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92w27\" (UniqueName: \"kubernetes.io/projected/8b7db9a0-58a5-496b-826c-6e64920151b8-kube-api-access-92w27\") pod \"certified-operators-m75r4\" (UID: \"8b7db9a0-58a5-496b-826c-6e64920151b8\") " pod="openshift-marketplace/certified-operators-m75r4" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.156820 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b7db9a0-58a5-496b-826c-6e64920151b8-utilities\") pod \"certified-operators-m75r4\" (UID: \"8b7db9a0-58a5-496b-826c-6e64920151b8\") " pod="openshift-marketplace/certified-operators-m75r4" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.156856 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a16a514-cf37-4eb5-a775-6dc2573704cf-catalog-content\") pod \"community-operators-9rz8s\" (UID: \"3a16a514-cf37-4eb5-a775-6dc2573704cf\") " pod="openshift-marketplace/community-operators-9rz8s" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.156881 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a16a514-cf37-4eb5-a775-6dc2573704cf-utilities\") pod \"community-operators-9rz8s\" (UID: \"3a16a514-cf37-4eb5-a775-6dc2573704cf\") " pod="openshift-marketplace/community-operators-9rz8s" Jan 27 06:48:53 crc kubenswrapper[4796]: E0127 06:48:53.156921 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:53.656888447 +0000 UTC m=+154.763855944 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.157335 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b7db9a0-58a5-496b-826c-6e64920151b8-catalog-content\") pod \"certified-operators-m75r4\" (UID: \"8b7db9a0-58a5-496b-826c-6e64920151b8\") " pod="openshift-marketplace/certified-operators-m75r4" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.157588 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b7db9a0-58a5-496b-826c-6e64920151b8-utilities\") pod \"certified-operators-m75r4\" (UID: \"8b7db9a0-58a5-496b-826c-6e64920151b8\") " pod="openshift-marketplace/certified-operators-m75r4" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.164878 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sk2nz" event={"ID":"884fd52c-4588-455a-a5b7-b333dc17aa3c","Type":"ContainerStarted","Data":"5eacfad8991645992053ed5101d9e6b8e21890a127d78d1635ebe468958a9f1f"} Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.180858 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bzdxk" podStartSLOduration=127.180839831 podStartE2EDuration="2m7.180839831s" podCreationTimestamp="2026-01-27 06:46:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 
06:48:53.180340129 +0000 UTC m=+154.287307446" watchObservedRunningTime="2026-01-27 06:48:53.180839831 +0000 UTC m=+154.287807158" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.202889 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92w27\" (UniqueName: \"kubernetes.io/projected/8b7db9a0-58a5-496b-826c-6e64920151b8-kube-api-access-92w27\") pod \"certified-operators-m75r4\" (UID: \"8b7db9a0-58a5-496b-826c-6e64920151b8\") " pod="openshift-marketplace/certified-operators-m75r4" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.210690 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pxnl2" event={"ID":"3b10ebe2-f762-4ac3-a59a-76aa1256bdf7","Type":"ContainerStarted","Data":"ee7c734388f6fe764b5416bd43058706e3f0d4e8958e0791f07fab5b86292b41"} Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.226176 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m75r4" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.238332 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-69rqv" event={"ID":"276663de-ff96-4732-911a-fae0b469545e","Type":"ContainerStarted","Data":"3762d7d4727e27efd1cc853736cb6565ed37db8b81c7472c109abbc70954e53c"} Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.242228 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pxnl2" podStartSLOduration=126.242216313 podStartE2EDuration="2m6.242216313s" podCreationTimestamp="2026-01-27 06:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:53.241439002 +0000 UTC m=+154.348406339" watchObservedRunningTime="2026-01-27 06:48:53.242216313 +0000 UTC m=+154.349183640" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.259263 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a16a514-cf37-4eb5-a775-6dc2573704cf-catalog-content\") pod \"community-operators-9rz8s\" (UID: \"3a16a514-cf37-4eb5-a775-6dc2573704cf\") " pod="openshift-marketplace/community-operators-9rz8s" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.259343 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a16a514-cf37-4eb5-a775-6dc2573704cf-utilities\") pod \"community-operators-9rz8s\" (UID: \"3a16a514-cf37-4eb5-a775-6dc2573704cf\") " pod="openshift-marketplace/community-operators-9rz8s" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.259571 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.259592 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47524\" (UniqueName: \"kubernetes.io/projected/3a16a514-cf37-4eb5-a775-6dc2573704cf-kube-api-access-47524\") pod \"community-operators-9rz8s\" (UID: \"3a16a514-cf37-4eb5-a775-6dc2573704cf\") " 
pod="openshift-marketplace/community-operators-9rz8s" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.260082 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a16a514-cf37-4eb5-a775-6dc2573704cf-utilities\") pod \"community-operators-9rz8s\" (UID: \"3a16a514-cf37-4eb5-a775-6dc2573704cf\") " pod="openshift-marketplace/community-operators-9rz8s" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.266655 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a16a514-cf37-4eb5-a775-6dc2573704cf-catalog-content\") pod \"community-operators-9rz8s\" (UID: \"3a16a514-cf37-4eb5-a775-6dc2573704cf\") " pod="openshift-marketplace/community-operators-9rz8s" Jan 27 06:48:53 crc kubenswrapper[4796]: E0127 06:48:53.267316 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:53.767295557 +0000 UTC m=+154.874262884 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.296316 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-q644h" event={"ID":"e3d76e7c-c94d-4b43-82cf-98ca9dc9c019","Type":"ContainerStarted","Data":"fc9cf775649701520172ace1a4951204aceaaf8e31524b0a1d11f137e8c533f1"} Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.296400 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-q644h" event={"ID":"e3d76e7c-c94d-4b43-82cf-98ca9dc9c019","Type":"ContainerStarted","Data":"5d003f00d4a7e799da70bf216562cf5bd03fbd85baa4a6d0c4c704561413b7f3"} Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.296778 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-q644h" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.296939 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.298519 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.323160 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.330097 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5fxsd" event={"ID":"1ef2dc43-f6e3-4a23-85bc-f23d912b2d6d","Type":"ContainerStarted","Data":"c258650a47d0ee474b727f9f038eefa04edee47bed7211e7894b1b0359ad1547"} Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.334316 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6g4p8" event={"ID":"5d1e76c1-ba8e-41b1-a312-0670b06bc59f","Type":"ContainerStarted","Data":"af46a7212299b42a905c737cf1e73cd649d0aa986150875218a84b5255bc1903"} Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.341030 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47524\" (UniqueName: \"kubernetes.io/projected/3a16a514-cf37-4eb5-a775-6dc2573704cf-kube-api-access-47524\") pod \"community-operators-9rz8s\" (UID: \"3a16a514-cf37-4eb5-a775-6dc2573704cf\") " pod="openshift-marketplace/community-operators-9rz8s" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.343226 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.375102 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.380617 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.381226 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7cdd44d2-d007-4f73-9203-5922a7d660f5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7cdd44d2-d007-4f73-9203-5922a7d660f5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.381300 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7cdd44d2-d007-4f73-9203-5922a7d660f5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7cdd44d2-d007-4f73-9203-5922a7d660f5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.395637 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-q644h" podStartSLOduration=8.395607194 podStartE2EDuration="8.395607194s" podCreationTimestamp="2026-01-27 06:48:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:53.37204474 +0000 UTC m=+154.479012067" watchObservedRunningTime="2026-01-27 06:48:53.395607194 +0000 UTC m=+154.502574521" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 
06:48:53.397354 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-46nhj" event={"ID":"68d73a51-598c-41e8-9064-3942bd4f93df","Type":"ContainerStarted","Data":"6071144cb97945969223578bdf2d7189f29ef34d725603f5e8da2dd4fc93e644"} Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.397770 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-27 06:43:52 +0000 UTC, rotation deadline is 2026-11-10 10:24:54.892094237 +0000 UTC Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.397812 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6891h36m1.494284245s for next certificate rotation Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.402930 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9rz8s" Jan 27 06:48:53 crc kubenswrapper[4796]: E0127 06:48:53.404885 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:53.904842375 +0000 UTC m=+155.011809702 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.440923 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9t47f" event={"ID":"73db7fe9-d1e0-44f1-88fa-a186dec7a4b0","Type":"ContainerStarted","Data":"99879a75365439678fa7a3c9eea0ff7d7440e2b1e6190542eff04761fb911756"} Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.463072 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-6g4p8" podStartSLOduration=126.463049734 podStartE2EDuration="2m6.463049734s" podCreationTimestamp="2026-01-27 06:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:53.461593046 +0000 UTC m=+154.568560373" watchObservedRunningTime="2026-01-27 06:48:53.463049734 +0000 UTC m=+154.570017061" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.495131 4796 generic.go:334] "Generic (PLEG): container finished" podID="bf5d3de7-2818-4aa8-9c56-81244d431713" containerID="3547cf461a7aa7e9171f636728a01e969be2d05c0d109bc82b88f719e99fd027" exitCode=0 Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.495230 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" event={"ID":"bf5d3de7-2818-4aa8-9c56-81244d431713","Type":"ContainerDied","Data":"3547cf461a7aa7e9171f636728a01e969be2d05c0d109bc82b88f719e99fd027"} Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.495259 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" 
event={"ID":"bf5d3de7-2818-4aa8-9c56-81244d431713","Type":"ContainerStarted","Data":"c7dfdfd727600d847ff6ff5982a4f81374c00554a2e4284fa7f00cc8fea5d824"} Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.514418 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7cdd44d2-d007-4f73-9203-5922a7d660f5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7cdd44d2-d007-4f73-9203-5922a7d660f5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.514483 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.514585 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.514639 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.514690 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7cdd44d2-d007-4f73-9203-5922a7d660f5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7cdd44d2-d007-4f73-9203-5922a7d660f5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.515057 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7cdd44d2-d007-4f73-9203-5922a7d660f5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7cdd44d2-d007-4f73-9203-5922a7d660f5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 06:48:53 crc kubenswrapper[4796]: E0127 06:48:53.520130 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:54.020108822 +0000 UTC m=+155.127076149 (durationBeforeRetry 500ms). 
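Annotation: the certificate_manager lines above log a rotation deadline chosen inside the serving certificate's validity window and then simply wait for it; the "Waiting 6891h36m…" figure is the time remaining from the moment of logging until that deadline. A small sketch of that subtraction with the timestamps quoted in the log (the logging instant is approximated from the entry's own timestamp):

    package main

    import (
    	"fmt"
    	"time"
    )

    func mustParse(v string) time.Time {
    	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", v)
    	if err != nil {
    		panic(err)
    	}
    	return t
    }

    func main() {
    	logged := mustParse("2026-01-27 06:48:53.397812 +0000 UTC")   // approximate log instant
    	deadline := mustParse("2026-11-10 10:24:54.892094237 +0000 UTC") // rotation deadline from the log
    	fmt.Println(deadline.Sub(logged)) // ~6891h36m1.49s, matching the logged wait
    }
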
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.522164 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-46nhj" podStartSLOduration=126.522146895 podStartE2EDuration="2m6.522146895s" podCreationTimestamp="2026-01-27 06:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:53.521221571 +0000 UTC m=+154.628188898" watchObservedRunningTime="2026-01-27 06:48:53.522146895 +0000 UTC m=+154.629114222" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.523582 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.531274 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lf8gr" event={"ID":"e05300a3-a8c8-495f-a4fd-79f326ce0d73","Type":"ContainerStarted","Data":"1051f56539c97c629219f759b047be4029db7ffb5e28c7cae91342de19217091"} Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.540448 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.544884 4796 patch_prober.go:28] interesting pod/downloads-7954f5f757-ltfqp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.544955 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ltfqp" podUID="f2e2d019-7fb7-4f75-81ee-b20a700c8f0b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.549242 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j8r59"] Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.561691 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-4ddms" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.576927 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/7cdd44d2-d007-4f73-9203-5922a7d660f5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7cdd44d2-d007-4f73-9203-5922a7d660f5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.592080 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-mr9hz" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.626078 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.626796 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.627024 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:53 crc kubenswrapper[4796]: E0127 06:48:53.629003 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:54.128962212 +0000 UTC m=+155.235929539 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.659528 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.662007 4796 patch_prober.go:28] interesting pod/router-default-5444994796-qwn64 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:48:53 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Jan 27 06:48:53 crc kubenswrapper[4796]: [+]process-running ok Jan 27 06:48:53 crc kubenswrapper[4796]: healthz check failed Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.666137 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qwn64" podUID="663b211e-0671-47e8-ae93-5c177e7a21eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.672510 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:53 crc kubenswrapper[4796]: W0127 06:48:53.672945 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25bb764a_c021_4c8b_b0d4_da2f5ed8a4ae.slice/crio-04d2bd96db3fbd4a0cd9a864af84bd6fb0e38e5b573683956d5e28608c5cce99 WatchSource:0}: Error finding container 04d2bd96db3fbd4a0cd9a864af84bd6fb0e38e5b573683956d5e28608c5cce99: Status 404 returned error can't find the container with id 04d2bd96db3fbd4a0cd9a864af84bd6fb0e38e5b573683956d5e28608c5cce99 Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.694995 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.710457 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.730180 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:53 crc kubenswrapper[4796]: E0127 06:48:53.730597 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:54.230580933 +0000 UTC m=+155.337548260 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.774424 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.832671 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:53 crc kubenswrapper[4796]: E0127 06:48:53.833770 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:54.333740964 +0000 UTC m=+155.440708291 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.865060 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.889299 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:53 crc kubenswrapper[4796]: I0127 06:48:53.941008 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:53 crc kubenswrapper[4796]: E0127 06:48:53.941385 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:54.441369082 +0000 UTC m=+155.548336409 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:54 crc kubenswrapper[4796]: I0127 06:48:54.042125 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:54 crc kubenswrapper[4796]: E0127 06:48:54.042644 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:54.542624054 +0000 UTC m=+155.649591381 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:54 crc kubenswrapper[4796]: I0127 06:48:54.146576 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:54 crc kubenswrapper[4796]: E0127 06:48:54.146953 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:54.646934045 +0000 UTC m=+155.753901372 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:54 crc kubenswrapper[4796]: I0127 06:48:54.223727 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m75r4"] Jan 27 06:48:54 crc kubenswrapper[4796]: I0127 06:48:54.254350 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:54 crc kubenswrapper[4796]: E0127 06:48:54.254880 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:54.75485167 +0000 UTC m=+155.861819007 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:54 crc kubenswrapper[4796]: W0127 06:48:54.339979 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b7db9a0_58a5_496b_826c_6e64920151b8.slice/crio-eb489537ef135147b1f8b5124e263e752fdd1ae3fd619e665eddc9df7c02f392 WatchSource:0}: Error finding container eb489537ef135147b1f8b5124e263e752fdd1ae3fd619e665eddc9df7c02f392: Status 404 returned error can't find the container with id eb489537ef135147b1f8b5124e263e752fdd1ae3fd619e665eddc9df7c02f392 Jan 27 06:48:54 crc kubenswrapper[4796]: I0127 06:48:54.369346 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:54 crc kubenswrapper[4796]: E0127 06:48:54.369949 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:54.869929282 +0000 UTC m=+155.976896599 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:54 crc kubenswrapper[4796]: I0127 06:48:54.472621 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:54 crc kubenswrapper[4796]: E0127 06:48:54.473048 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:54.973013751 +0000 UTC m=+156.079981078 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:54 crc kubenswrapper[4796]: I0127 06:48:54.544954 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cv5hm"] Jan 27 06:48:54 crc kubenswrapper[4796]: I0127 06:48:54.562623 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m75r4" event={"ID":"8b7db9a0-58a5-496b-826c-6e64920151b8","Type":"ContainerStarted","Data":"eb489537ef135147b1f8b5124e263e752fdd1ae3fd619e665eddc9df7c02f392"} Jan 27 06:48:54 crc kubenswrapper[4796]: I0127 06:48:54.574995 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:54 crc kubenswrapper[4796]: E0127 06:48:54.575444 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:55.075427603 +0000 UTC m=+156.182394920 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:54 crc kubenswrapper[4796]: I0127 06:48:54.587777 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" event={"ID":"bf5d3de7-2818-4aa8-9c56-81244d431713","Type":"ContainerStarted","Data":"25f4d92d8d3db500234a42e4ca0fa202023058b4d7216e99e71eff57cc282f24"} Jan 27 06:48:54 crc kubenswrapper[4796]: I0127 06:48:54.595931 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8r59" event={"ID":"25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae","Type":"ContainerStarted","Data":"30788984015f3669b8d1b3697085641ab64878b7b4fc16bf33709dcad9f9af67"} Jan 27 06:48:54 crc kubenswrapper[4796]: I0127 06:48:54.596002 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8r59" event={"ID":"25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae","Type":"ContainerStarted","Data":"04d2bd96db3fbd4a0cd9a864af84bd6fb0e38e5b573683956d5e28608c5cce99"} Jan 27 06:48:54 crc kubenswrapper[4796]: I0127 06:48:54.611174 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8lc8b"] Jan 27 06:48:54 crc kubenswrapper[4796]: I0127 06:48:54.623742 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8lc8b" Jan 27 06:48:54 crc kubenswrapper[4796]: I0127 06:48:54.630267 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 06:48:54 crc kubenswrapper[4796]: I0127 06:48:54.645890 4796 patch_prober.go:28] interesting pod/router-default-5444994796-qwn64 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:48:54 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Jan 27 06:48:54 crc kubenswrapper[4796]: [+]process-running ok Jan 27 06:48:54 crc kubenswrapper[4796]: healthz check failed Jan 27 06:48:54 crc kubenswrapper[4796]: I0127 06:48:54.645967 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qwn64" podUID="663b211e-0671-47e8-ae93-5c177e7a21eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:48:54 crc kubenswrapper[4796]: I0127 06:48:54.654461 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8lc8b"] Jan 27 06:48:54 crc kubenswrapper[4796]: I0127 06:48:54.658896 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" podStartSLOduration=128.6588584 podStartE2EDuration="2m8.6588584s" podCreationTimestamp="2026-01-27 06:46:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:54.656491628 +0000 UTC m=+155.763458955" watchObservedRunningTime="2026-01-27 06:48:54.6588584 +0000 UTC m=+155.765825747" 
Jan 27 06:48:54 crc kubenswrapper[4796]: I0127 06:48:54.678825 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:54 crc kubenswrapper[4796]: E0127 06:48:54.680591 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:55.180568966 +0000 UTC m=+156.287536293 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:54 crc kubenswrapper[4796]: I0127 06:48:54.785452 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:54 crc kubenswrapper[4796]: I0127 06:48:54.785887 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7trsn\" (UniqueName: \"kubernetes.io/projected/47a26ac3-524b-47c5-abb8-d2c4837659e7-kube-api-access-7trsn\") pod \"redhat-marketplace-8lc8b\" (UID: \"47a26ac3-524b-47c5-abb8-d2c4837659e7\") " pod="openshift-marketplace/redhat-marketplace-8lc8b" Jan 27 06:48:54 crc kubenswrapper[4796]: I0127 06:48:54.785931 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47a26ac3-524b-47c5-abb8-d2c4837659e7-utilities\") pod \"redhat-marketplace-8lc8b\" (UID: \"47a26ac3-524b-47c5-abb8-d2c4837659e7\") " pod="openshift-marketplace/redhat-marketplace-8lc8b" Jan 27 06:48:54 crc kubenswrapper[4796]: I0127 06:48:54.785953 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47a26ac3-524b-47c5-abb8-d2c4837659e7-catalog-content\") pod \"redhat-marketplace-8lc8b\" (UID: \"47a26ac3-524b-47c5-abb8-d2c4837659e7\") " pod="openshift-marketplace/redhat-marketplace-8lc8b" Jan 27 06:48:54 crc kubenswrapper[4796]: E0127 06:48:54.787845 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:55.287820764 +0000 UTC m=+156.394788281 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:54 crc kubenswrapper[4796]: I0127 06:48:54.844583 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9rz8s"] Jan 27 06:48:54 crc kubenswrapper[4796]: I0127 06:48:54.902095 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:54 crc kubenswrapper[4796]: I0127 06:48:54.902422 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47a26ac3-524b-47c5-abb8-d2c4837659e7-utilities\") pod \"redhat-marketplace-8lc8b\" (UID: \"47a26ac3-524b-47c5-abb8-d2c4837659e7\") " pod="openshift-marketplace/redhat-marketplace-8lc8b" Jan 27 06:48:54 crc kubenswrapper[4796]: I0127 06:48:54.902475 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47a26ac3-524b-47c5-abb8-d2c4837659e7-catalog-content\") pod \"redhat-marketplace-8lc8b\" (UID: \"47a26ac3-524b-47c5-abb8-d2c4837659e7\") " pod="openshift-marketplace/redhat-marketplace-8lc8b" Jan 27 06:48:54 crc kubenswrapper[4796]: I0127 06:48:54.902737 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7trsn\" (UniqueName: \"kubernetes.io/projected/47a26ac3-524b-47c5-abb8-d2c4837659e7-kube-api-access-7trsn\") pod \"redhat-marketplace-8lc8b\" (UID: \"47a26ac3-524b-47c5-abb8-d2c4837659e7\") " pod="openshift-marketplace/redhat-marketplace-8lc8b" Jan 27 06:48:54 crc kubenswrapper[4796]: E0127 06:48:54.904070 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:55.403989475 +0000 UTC m=+156.510956802 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:54 crc kubenswrapper[4796]: I0127 06:48:54.904162 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47a26ac3-524b-47c5-abb8-d2c4837659e7-utilities\") pod \"redhat-marketplace-8lc8b\" (UID: \"47a26ac3-524b-47c5-abb8-d2c4837659e7\") " pod="openshift-marketplace/redhat-marketplace-8lc8b" Jan 27 06:48:54 crc kubenswrapper[4796]: I0127 06:48:54.904561 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47a26ac3-524b-47c5-abb8-d2c4837659e7-catalog-content\") pod \"redhat-marketplace-8lc8b\" (UID: \"47a26ac3-524b-47c5-abb8-d2c4837659e7\") " pod="openshift-marketplace/redhat-marketplace-8lc8b" Jan 27 06:48:54 crc kubenswrapper[4796]: I0127 06:48:54.953249 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7trsn\" (UniqueName: \"kubernetes.io/projected/47a26ac3-524b-47c5-abb8-d2c4837659e7-kube-api-access-7trsn\") pod \"redhat-marketplace-8lc8b\" (UID: \"47a26ac3-524b-47c5-abb8-d2c4837659e7\") " pod="openshift-marketplace/redhat-marketplace-8lc8b" Jan 27 06:48:54 crc kubenswrapper[4796]: I0127 06:48:54.984481 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.009264 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:55 crc kubenswrapper[4796]: E0127 06:48:55.010341 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:55.510300868 +0000 UTC m=+156.617268195 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.015032 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2pn7g"] Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.016316 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2pn7g" Jan 27 06:48:55 crc kubenswrapper[4796]: W0127 06:48:55.062594 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-97e23f1c097c9486ac971666d4bc3f251b880703861a56ec7da42a5a7a308346 WatchSource:0}: Error finding container 97e23f1c097c9486ac971666d4bc3f251b880703861a56ec7da42a5a7a308346: Status 404 returned error can't find the container with id 97e23f1c097c9486ac971666d4bc3f251b880703861a56ec7da42a5a7a308346 Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.114130 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2pn7g"] Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.119236 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.119558 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47d6z\" (UniqueName: \"kubernetes.io/projected/5397ad23-1136-4577-b821-4199914b8582-kube-api-access-47d6z\") pod \"redhat-marketplace-2pn7g\" (UID: \"5397ad23-1136-4577-b821-4199914b8582\") " pod="openshift-marketplace/redhat-marketplace-2pn7g" Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.119607 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5397ad23-1136-4577-b821-4199914b8582-utilities\") pod \"redhat-marketplace-2pn7g\" (UID: \"5397ad23-1136-4577-b821-4199914b8582\") " pod="openshift-marketplace/redhat-marketplace-2pn7g" Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.119671 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5397ad23-1136-4577-b821-4199914b8582-catalog-content\") pod \"redhat-marketplace-2pn7g\" (UID: \"5397ad23-1136-4577-b821-4199914b8582\") " pod="openshift-marketplace/redhat-marketplace-2pn7g" Jan 27 06:48:55 crc kubenswrapper[4796]: E0127 06:48:55.119801 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:55.619776534 +0000 UTC m=+156.726743861 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.119924 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8lc8b" Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.221202 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47d6z\" (UniqueName: \"kubernetes.io/projected/5397ad23-1136-4577-b821-4199914b8582-kube-api-access-47d6z\") pod \"redhat-marketplace-2pn7g\" (UID: \"5397ad23-1136-4577-b821-4199914b8582\") " pod="openshift-marketplace/redhat-marketplace-2pn7g" Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.221302 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5397ad23-1136-4577-b821-4199914b8582-utilities\") pod \"redhat-marketplace-2pn7g\" (UID: \"5397ad23-1136-4577-b821-4199914b8582\") " pod="openshift-marketplace/redhat-marketplace-2pn7g" Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.221359 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.221413 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5397ad23-1136-4577-b821-4199914b8582-catalog-content\") pod \"redhat-marketplace-2pn7g\" (UID: \"5397ad23-1136-4577-b821-4199914b8582\") " pod="openshift-marketplace/redhat-marketplace-2pn7g" Jan 27 06:48:55 crc kubenswrapper[4796]: E0127 06:48:55.222510 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:55.722484743 +0000 UTC m=+156.829452070 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.225599 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5397ad23-1136-4577-b821-4199914b8582-catalog-content\") pod \"redhat-marketplace-2pn7g\" (UID: \"5397ad23-1136-4577-b821-4199914b8582\") " pod="openshift-marketplace/redhat-marketplace-2pn7g" Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.230957 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5397ad23-1136-4577-b821-4199914b8582-utilities\") pod \"redhat-marketplace-2pn7g\" (UID: \"5397ad23-1136-4577-b821-4199914b8582\") " pod="openshift-marketplace/redhat-marketplace-2pn7g" Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.293564 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47d6z\" (UniqueName: \"kubernetes.io/projected/5397ad23-1136-4577-b821-4199914b8582-kube-api-access-47d6z\") pod \"redhat-marketplace-2pn7g\" (UID: \"5397ad23-1136-4577-b821-4199914b8582\") " pod="openshift-marketplace/redhat-marketplace-2pn7g" Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.325222 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:55 crc kubenswrapper[4796]: E0127 06:48:55.325606 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:55.825587993 +0000 UTC m=+156.932555310 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.407730 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2pn7g" Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.430069 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:55 crc kubenswrapper[4796]: E0127 06:48:55.430677 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:55.930661414 +0000 UTC m=+157.037628741 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.535128 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:55 crc kubenswrapper[4796]: E0127 06:48:55.535667 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:56.035645513 +0000 UTC m=+157.142612840 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.604715 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mn94j"] Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.605994 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mn94j" Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.612787 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.620789 4796 generic.go:334] "Generic (PLEG): container finished" podID="8b7db9a0-58a5-496b-826c-6e64920151b8" containerID="852acdcb0e03122744b10a1c55e1b622c1f697bf362136f34b8be69899d78aad" exitCode=0 Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.621115 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m75r4" event={"ID":"8b7db9a0-58a5-496b-826c-6e64920151b8","Type":"ContainerDied","Data":"852acdcb0e03122744b10a1c55e1b622c1f697bf362136f34b8be69899d78aad"} Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.623298 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mn94j"] Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.627009 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.638748 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:55 crc kubenswrapper[4796]: E0127 06:48:55.639105 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:56.139092712 +0000 UTC m=+157.246060029 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.648898 4796 patch_prober.go:28] interesting pod/router-default-5444994796-qwn64 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:48:55 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Jan 27 06:48:55 crc kubenswrapper[4796]: [+]process-running ok Jan 27 06:48:55 crc kubenswrapper[4796]: healthz check failed Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.648977 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qwn64" podUID="663b211e-0671-47e8-ae93-5c177e7a21eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.650042 4796 generic.go:334] "Generic (PLEG): container finished" podID="242ef06f-796a-4c77-810b-bde4a5fbc087" containerID="2c924eecca1b1fac5921a41c093b6aca85e75f193d3273a172358603fc6b3eb4" exitCode=0 Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.650142 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cv5hm" event={"ID":"242ef06f-796a-4c77-810b-bde4a5fbc087","Type":"ContainerDied","Data":"2c924eecca1b1fac5921a41c093b6aca85e75f193d3273a172358603fc6b3eb4"} Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.650174 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cv5hm" event={"ID":"242ef06f-796a-4c77-810b-bde4a5fbc087","Type":"ContainerStarted","Data":"c951f899254af16dcebbdb105e4675d5880438c3391b4ceaa1703b38b1ef9f05"} Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.685499 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"bd5ae8053ea823f4e1002b2805e883c4ca2d3e5a7c12321bf0e49335c947002e"} Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.715800 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7cdd44d2-d007-4f73-9203-5922a7d660f5","Type":"ContainerStarted","Data":"f2b74e926be2856ab2df0e24faf38fe1c89527ad9c28127e9b5171706d775184"} Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.731011 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c80b7cac3d80b963a1cf1896f4975de2d63fa068a37d09468e6c63b5e4db7438"} Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.732217 4796 generic.go:334] "Generic (PLEG): container finished" podID="25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae" containerID="30788984015f3669b8d1b3697085641ab64878b7b4fc16bf33709dcad9f9af67" exitCode=0 Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.732337 4796 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8r59" event={"ID":"25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae","Type":"ContainerDied","Data":"30788984015f3669b8d1b3697085641ab64878b7b4fc16bf33709dcad9f9af67"} Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.740268 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.740663 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blrrh\" (UniqueName: \"kubernetes.io/projected/ae643542-ca5e-4cee-aaba-818f3d424763-kube-api-access-blrrh\") pod \"redhat-operators-mn94j\" (UID: \"ae643542-ca5e-4cee-aaba-818f3d424763\") " pod="openshift-marketplace/redhat-operators-mn94j" Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.740793 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae643542-ca5e-4cee-aaba-818f3d424763-utilities\") pod \"redhat-operators-mn94j\" (UID: \"ae643542-ca5e-4cee-aaba-818f3d424763\") " pod="openshift-marketplace/redhat-operators-mn94j" Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.740907 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae643542-ca5e-4cee-aaba-818f3d424763-catalog-content\") pod \"redhat-operators-mn94j\" (UID: \"ae643542-ca5e-4cee-aaba-818f3d424763\") " pod="openshift-marketplace/redhat-operators-mn94j" Jan 27 06:48:55 crc kubenswrapper[4796]: E0127 06:48:55.741458 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:56.241419211 +0000 UTC m=+157.348386538 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.776784 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"97e23f1c097c9486ac971666d4bc3f251b880703861a56ec7da42a5a7a308346"} Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.803955 4796 generic.go:334] "Generic (PLEG): container finished" podID="3a16a514-cf37-4eb5-a775-6dc2573704cf" containerID="47ab2bb5c017ea62930e9e81580c46fb691bb90d97f16740c9e945ba9c50c6ed" exitCode=0 Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.804687 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9rz8s" event={"ID":"3a16a514-cf37-4eb5-a775-6dc2573704cf","Type":"ContainerDied","Data":"47ab2bb5c017ea62930e9e81580c46fb691bb90d97f16740c9e945ba9c50c6ed"} Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.804714 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9rz8s" event={"ID":"3a16a514-cf37-4eb5-a775-6dc2573704cf","Type":"ContainerStarted","Data":"9a748d2d3a84faf782d3f04ba209723d6e85056d78524ea6ad140b17fc322a78"} Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.856434 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae643542-ca5e-4cee-aaba-818f3d424763-catalog-content\") pod \"redhat-operators-mn94j\" (UID: \"ae643542-ca5e-4cee-aaba-818f3d424763\") " pod="openshift-marketplace/redhat-operators-mn94j" Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.856631 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.856719 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blrrh\" (UniqueName: \"kubernetes.io/projected/ae643542-ca5e-4cee-aaba-818f3d424763-kube-api-access-blrrh\") pod \"redhat-operators-mn94j\" (UID: \"ae643542-ca5e-4cee-aaba-818f3d424763\") " pod="openshift-marketplace/redhat-operators-mn94j" Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.856846 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae643542-ca5e-4cee-aaba-818f3d424763-utilities\") pod \"redhat-operators-mn94j\" (UID: \"ae643542-ca5e-4cee-aaba-818f3d424763\") " pod="openshift-marketplace/redhat-operators-mn94j" Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.857395 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae643542-ca5e-4cee-aaba-818f3d424763-utilities\") pod 
\"redhat-operators-mn94j\" (UID: \"ae643542-ca5e-4cee-aaba-818f3d424763\") " pod="openshift-marketplace/redhat-operators-mn94j" Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.857800 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae643542-ca5e-4cee-aaba-818f3d424763-catalog-content\") pod \"redhat-operators-mn94j\" (UID: \"ae643542-ca5e-4cee-aaba-818f3d424763\") " pod="openshift-marketplace/redhat-operators-mn94j" Jan 27 06:48:55 crc kubenswrapper[4796]: E0127 06:48:55.858480 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:56.358466994 +0000 UTC m=+157.465434321 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.892283 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vrm5z"] Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.897553 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vrm5z" Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.902672 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blrrh\" (UniqueName: \"kubernetes.io/projected/ae643542-ca5e-4cee-aaba-818f3d424763-kube-api-access-blrrh\") pod \"redhat-operators-mn94j\" (UID: \"ae643542-ca5e-4cee-aaba-818f3d424763\") " pod="openshift-marketplace/redhat-operators-mn94j" Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.929381 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vrm5z"] Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.929893 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mn94j" Jan 27 06:48:55 crc kubenswrapper[4796]: I0127 06:48:55.958638 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:55 crc kubenswrapper[4796]: E0127 06:48:55.959268 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:56.459250524 +0000 UTC m=+157.566217851 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:56 crc kubenswrapper[4796]: I0127 06:48:56.056317 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8lc8b"] Jan 27 06:48:56 crc kubenswrapper[4796]: I0127 06:48:56.060527 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6f0ca0a-ff9c-4420-92bb-517ca68b906c-utilities\") pod \"redhat-operators-vrm5z\" (UID: \"e6f0ca0a-ff9c-4420-92bb-517ca68b906c\") " pod="openshift-marketplace/redhat-operators-vrm5z" Jan 27 06:48:56 crc kubenswrapper[4796]: I0127 06:48:56.060598 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:56 crc kubenswrapper[4796]: I0127 06:48:56.060634 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn6z8\" (UniqueName: \"kubernetes.io/projected/e6f0ca0a-ff9c-4420-92bb-517ca68b906c-kube-api-access-xn6z8\") pod \"redhat-operators-vrm5z\" (UID: \"e6f0ca0a-ff9c-4420-92bb-517ca68b906c\") " pod="openshift-marketplace/redhat-operators-vrm5z" Jan 27 06:48:56 crc kubenswrapper[4796]: I0127 06:48:56.060662 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6f0ca0a-ff9c-4420-92bb-517ca68b906c-catalog-content\") pod \"redhat-operators-vrm5z\" (UID: \"e6f0ca0a-ff9c-4420-92bb-517ca68b906c\") " pod="openshift-marketplace/redhat-operators-vrm5z" Jan 27 06:48:56 crc kubenswrapper[4796]: E0127 06:48:56.061059 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:56.561044819 +0000 UTC m=+157.668012136 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:56 crc kubenswrapper[4796]: I0127 06:48:56.165124 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:56 crc kubenswrapper[4796]: I0127 06:48:56.165557 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6f0ca0a-ff9c-4420-92bb-517ca68b906c-utilities\") pod \"redhat-operators-vrm5z\" (UID: \"e6f0ca0a-ff9c-4420-92bb-517ca68b906c\") " pod="openshift-marketplace/redhat-operators-vrm5z" Jan 27 06:48:56 crc kubenswrapper[4796]: I0127 06:48:56.165645 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn6z8\" (UniqueName: \"kubernetes.io/projected/e6f0ca0a-ff9c-4420-92bb-517ca68b906c-kube-api-access-xn6z8\") pod \"redhat-operators-vrm5z\" (UID: \"e6f0ca0a-ff9c-4420-92bb-517ca68b906c\") " pod="openshift-marketplace/redhat-operators-vrm5z" Jan 27 06:48:56 crc kubenswrapper[4796]: I0127 06:48:56.165686 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6f0ca0a-ff9c-4420-92bb-517ca68b906c-catalog-content\") pod \"redhat-operators-vrm5z\" (UID: \"e6f0ca0a-ff9c-4420-92bb-517ca68b906c\") " pod="openshift-marketplace/redhat-operators-vrm5z" Jan 27 06:48:56 crc kubenswrapper[4796]: I0127 06:48:56.166305 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6f0ca0a-ff9c-4420-92bb-517ca68b906c-catalog-content\") pod \"redhat-operators-vrm5z\" (UID: \"e6f0ca0a-ff9c-4420-92bb-517ca68b906c\") " pod="openshift-marketplace/redhat-operators-vrm5z" Jan 27 06:48:56 crc kubenswrapper[4796]: E0127 06:48:56.166435 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:56.666411328 +0000 UTC m=+157.773378665 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:56 crc kubenswrapper[4796]: I0127 06:48:56.166742 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6f0ca0a-ff9c-4420-92bb-517ca68b906c-utilities\") pod \"redhat-operators-vrm5z\" (UID: \"e6f0ca0a-ff9c-4420-92bb-517ca68b906c\") " pod="openshift-marketplace/redhat-operators-vrm5z" Jan 27 06:48:56 crc kubenswrapper[4796]: I0127 06:48:56.212598 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn6z8\" (UniqueName: \"kubernetes.io/projected/e6f0ca0a-ff9c-4420-92bb-517ca68b906c-kube-api-access-xn6z8\") pod \"redhat-operators-vrm5z\" (UID: \"e6f0ca0a-ff9c-4420-92bb-517ca68b906c\") " pod="openshift-marketplace/redhat-operators-vrm5z" Jan 27 06:48:56 crc kubenswrapper[4796]: I0127 06:48:56.266993 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:56 crc kubenswrapper[4796]: E0127 06:48:56.267461 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:56.767446575 +0000 UTC m=+157.874413902 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:56 crc kubenswrapper[4796]: I0127 06:48:56.322848 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vrm5z" Jan 27 06:48:56 crc kubenswrapper[4796]: I0127 06:48:56.332950 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2pn7g"] Jan 27 06:48:56 crc kubenswrapper[4796]: I0127 06:48:56.369993 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:56 crc kubenswrapper[4796]: E0127 06:48:56.370294 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-27 06:48:56.870257716 +0000 UTC m=+157.977225043 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:56 crc kubenswrapper[4796]: I0127 06:48:56.370529 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:56 crc kubenswrapper[4796]: E0127 06:48:56.371010 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:56.870991676 +0000 UTC m=+157.977959003 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:56 crc kubenswrapper[4796]: I0127 06:48:56.481313 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:56 crc kubenswrapper[4796]: E0127 06:48:56.481915 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:56.981892759 +0000 UTC m=+158.088860086 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:56 crc kubenswrapper[4796]: I0127 06:48:56.585286 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:56 crc kubenswrapper[4796]: E0127 06:48:56.586251 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:57.08623228 +0000 UTC m=+158.193199607 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:56 crc kubenswrapper[4796]: I0127 06:48:56.643691 4796 patch_prober.go:28] interesting pod/router-default-5444994796-qwn64 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:48:56 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Jan 27 06:48:56 crc kubenswrapper[4796]: [+]process-running ok Jan 27 06:48:56 crc kubenswrapper[4796]: healthz check failed Jan 27 06:48:56 crc kubenswrapper[4796]: I0127 06:48:56.643766 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qwn64" podUID="663b211e-0671-47e8-ae93-5c177e7a21eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:48:56 crc kubenswrapper[4796]: I0127 06:48:56.671511 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mn94j"] Jan 27 06:48:56 crc kubenswrapper[4796]: I0127 06:48:56.686826 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:56 crc kubenswrapper[4796]: E0127 06:48:56.686978 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:57.186947398 +0000 UTC m=+158.293914725 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:56 crc kubenswrapper[4796]: I0127 06:48:56.687300 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:56 crc kubenswrapper[4796]: E0127 06:48:56.687898 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:57.187887303 +0000 UTC m=+158.294854630 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:56 crc kubenswrapper[4796]: I0127 06:48:56.789634 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:56 crc kubenswrapper[4796]: E0127 06:48:56.789934 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:57.289900994 +0000 UTC m=+158.396868321 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:56 crc kubenswrapper[4796]: I0127 06:48:56.790154 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:56 crc kubenswrapper[4796]: E0127 06:48:56.790515 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:57.290505509 +0000 UTC m=+158.397472836 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:56 crc kubenswrapper[4796]: I0127 06:48:56.829496 4796 generic.go:334] "Generic (PLEG): container finished" podID="98eaa352-8307-40bd-b8a3-16f2e3088fa4" containerID="f1e16f192d84820afb841d561acdb5fada7821549f4f91e67a81f36670e7e3c5" exitCode=0 Jan 27 06:48:56 crc kubenswrapper[4796]: I0127 06:48:56.829608 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-kd67n" event={"ID":"98eaa352-8307-40bd-b8a3-16f2e3088fa4","Type":"ContainerDied","Data":"f1e16f192d84820afb841d561acdb5fada7821549f4f91e67a81f36670e7e3c5"} Jan 27 06:48:56 crc kubenswrapper[4796]: I0127 06:48:56.845947 4796 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 27 06:48:56 crc kubenswrapper[4796]: I0127 06:48:56.880675 4796 generic.go:334] "Generic (PLEG): container finished" podID="47a26ac3-524b-47c5-abb8-d2c4837659e7" containerID="3b1074f4e3ad0d39594354e6ae5b575c9eef39a76881fe1e5c2333f5668645d7" exitCode=0 Jan 27 06:48:56 crc kubenswrapper[4796]: I0127 06:48:56.881523 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8lc8b" event={"ID":"47a26ac3-524b-47c5-abb8-d2c4837659e7","Type":"ContainerDied","Data":"3b1074f4e3ad0d39594354e6ae5b575c9eef39a76881fe1e5c2333f5668645d7"} Jan 27 06:48:56 crc kubenswrapper[4796]: I0127 06:48:56.881585 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8lc8b" event={"ID":"47a26ac3-524b-47c5-abb8-d2c4837659e7","Type":"ContainerStarted","Data":"acc3ab06f32197b8362333c6a8ff04de312864dcb7411359653cceedb5acf84a"} Jan 27 06:48:56 crc kubenswrapper[4796]: I0127 06:48:56.897006 4796 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:56 crc kubenswrapper[4796]: E0127 06:48:56.897435 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:57.397410588 +0000 UTC m=+158.504377915 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:56 crc kubenswrapper[4796]: I0127 06:48:56.915556 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"ee78e91c9062e07cb0d96692c55d66c9128ec6a11eea4a7eea0bbb3379600096"} Jan 27 06:48:56 crc kubenswrapper[4796]: I0127 06:48:56.918872 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:48:56 crc kubenswrapper[4796]: I0127 06:48:56.927343 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2pn7g" event={"ID":"5397ad23-1136-4577-b821-4199914b8582","Type":"ContainerStarted","Data":"173ce4d8662014d0c78dfbbfa004d9f03a25f6092be0b2e99a8665d9ea0e969f"} Jan 27 06:48:56 crc kubenswrapper[4796]: I0127 06:48:56.936909 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mn94j" event={"ID":"ae643542-ca5e-4cee-aaba-818f3d424763","Type":"ContainerStarted","Data":"d5f04050158d8535553945106832094abf89eeaabbba31534b93cc6e01669eeb"} Jan 27 06:48:56 crc kubenswrapper[4796]: I0127 06:48:56.964789 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-69rqv" event={"ID":"276663de-ff96-4732-911a-fae0b469545e","Type":"ContainerStarted","Data":"0cc3755693f99e3d91dce2cdffca4a732c9f2d3a674e07d9db2400fb2c5d0497"} Jan 27 06:48:56 crc kubenswrapper[4796]: I0127 06:48:56.979467 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"10a8e8bf3f5aaf6e846f341b29e9207f630964081bceee7ea30b9f9e225f1e9b"} Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.011622 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:57 crc kubenswrapper[4796]: E0127 06:48:57.012944 4796 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-27 06:48:57.512930792 +0000 UTC m=+158.619898119 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-wm5zr" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.022841 4796 generic.go:334] "Generic (PLEG): container finished" podID="7cdd44d2-d007-4f73-9203-5922a7d660f5" containerID="0945bed568d249e3d57585d3ec03c0da17e9c9948f5c3b029ba746f41959b826" exitCode=0 Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.022923 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7cdd44d2-d007-4f73-9203-5922a7d660f5","Type":"ContainerDied","Data":"0945bed568d249e3d57585d3ec03c0da17e9c9948f5c3b029ba746f41959b826"} Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.023027 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.023686 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.028243 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.028610 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.035034 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"1fcb45c5206891384ce2264440a5b6512652d2e90577d3046c1cd47bb3cffe0a"} Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.079704 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.115429 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.115932 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02b39a4a-b5bc-4367-8463-f772c5768b9c-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"02b39a4a-b5bc-4367-8463-f772c5768b9c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.116011 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/02b39a4a-b5bc-4367-8463-f772c5768b9c-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"02b39a4a-b5bc-4367-8463-f772c5768b9c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 06:48:57 crc kubenswrapper[4796]: E0127 06:48:57.116236 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-27 06:48:57.616203547 +0000 UTC m=+158.723170874 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.204734 4796 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-27T06:48:56.846384897Z","Handler":null,"Name":""} Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.206517 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vrm5z"] Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.207704 4796 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.207736 4796 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.216973 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02b39a4a-b5bc-4367-8463-f772c5768b9c-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"02b39a4a-b5bc-4367-8463-f772c5768b9c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.217050 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/02b39a4a-b5bc-4367-8463-f772c5768b9c-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"02b39a4a-b5bc-4367-8463-f772c5768b9c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.217080 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.217785 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/02b39a4a-b5bc-4367-8463-f772c5768b9c-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: 
\"02b39a4a-b5bc-4367-8463-f772c5768b9c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.224611 4796 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.224654 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.247271 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02b39a4a-b5bc-4367-8463-f772c5768b9c-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"02b39a4a-b5bc-4367-8463-f772c5768b9c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 06:48:57 crc kubenswrapper[4796]: W0127 06:48:57.263052 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6f0ca0a_ff9c_4420_92bb_517ca68b906c.slice/crio-2c189a000e3102a41ee1babce31bff4b4e0b9101fee81950c3584f75f55e29f3 WatchSource:0}: Error finding container 2c189a000e3102a41ee1babce31bff4b4e0b9101fee81950c3584f75f55e29f3: Status 404 returned error can't find the container with id 2c189a000e3102a41ee1babce31bff4b4e0b9101fee81950c3584f75f55e29f3 Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.301194 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-wm5zr\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.318396 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.346374 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.483098 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.485747 4796 patch_prober.go:28] interesting pod/downloads-7954f5f757-ltfqp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.485772 4796 patch_prober.go:28] interesting pod/downloads-7954f5f757-ltfqp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.485834 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-ltfqp" podUID="f2e2d019-7fb7-4f75-81ee-b20a700c8f0b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.485855 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ltfqp" podUID="f2e2d019-7fb7-4f75-81ee-b20a700c8f0b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.514003 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.629937 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pxnl2" Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.631793 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pxnl2" Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.643891 4796 patch_prober.go:28] interesting pod/router-default-5444994796-qwn64 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:48:57 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Jan 27 06:48:57 crc kubenswrapper[4796]: [+]process-running ok Jan 27 06:48:57 crc kubenswrapper[4796]: healthz check failed Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.643949 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qwn64" podUID="663b211e-0671-47e8-ae93-5c177e7a21eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.645579 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pxnl2" Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.870890 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.983427 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.984008 4796 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.989948 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-r6xbk" Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.989994 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-r6xbk" Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.992994 4796 patch_prober.go:28] interesting pod/apiserver-76f77b778f-m8qq4 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 27 06:48:57 crc kubenswrapper[4796]: [+]log ok Jan 27 06:48:57 crc kubenswrapper[4796]: [+]etcd ok Jan 27 06:48:57 crc kubenswrapper[4796]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 27 06:48:57 crc kubenswrapper[4796]: [+]poststarthook/generic-apiserver-start-informers ok Jan 27 06:48:57 crc kubenswrapper[4796]: [+]poststarthook/max-in-flight-filter ok Jan 27 06:48:57 crc kubenswrapper[4796]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 27 06:48:57 crc kubenswrapper[4796]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 27 06:48:57 crc kubenswrapper[4796]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 27 06:48:57 crc kubenswrapper[4796]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 27 06:48:57 crc kubenswrapper[4796]: [+]poststarthook/project.openshift.io-projectcache ok Jan 27 06:48:57 crc kubenswrapper[4796]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 27 06:48:57 crc kubenswrapper[4796]: [+]poststarthook/openshift.io-startinformers ok Jan 27 06:48:57 crc kubenswrapper[4796]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 27 06:48:57 crc kubenswrapper[4796]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 27 06:48:57 crc kubenswrapper[4796]: livez check failed Jan 27 06:48:57 crc kubenswrapper[4796]: I0127 06:48:57.993071 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" podUID="bf5d3de7-2818-4aa8-9c56-81244d431713" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:48:58 crc kubenswrapper[4796]: I0127 06:48:58.020518 4796 patch_prober.go:28] interesting pod/console-f9d7485db-r6xbk container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 27 06:48:58 crc kubenswrapper[4796]: I0127 06:48:58.020694 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-r6xbk" podUID="00061f00-b799-407e-8b71-30de57b92847" containerName="console" probeResult="failure" output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 27 06:48:58 crc kubenswrapper[4796]: I0127 06:48:58.050179 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wm5zr"] Jan 27 06:48:58 crc kubenswrapper[4796]: I0127 06:48:58.072035 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"02b39a4a-b5bc-4367-8463-f772c5768b9c","Type":"ContainerStarted","Data":"22ab03cf99f7f924eb598ef24a3c1a05b1f676f29b85321c61372dc6cdd160cb"} Jan 27 06:48:58 crc kubenswrapper[4796]: W0127 06:48:58.090928 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72371683_98ec_4d1a_a1cf_f9d2b072c3d7.slice/crio-59c5d48238599bc8a275bbf7a5dcc01a2f0bac0d2d976b3487fdc4aa4b49c496 WatchSource:0}: Error finding container 59c5d48238599bc8a275bbf7a5dcc01a2f0bac0d2d976b3487fdc4aa4b49c496: Status 404 returned error can't find the container with id 59c5d48238599bc8a275bbf7a5dcc01a2f0bac0d2d976b3487fdc4aa4b49c496 Jan 27 06:48:58 crc kubenswrapper[4796]: I0127 06:48:58.097324 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-69rqv" event={"ID":"276663de-ff96-4732-911a-fae0b469545e","Type":"ContainerStarted","Data":"ae36b8fc8d5670433a65fc11811303bfb2fe08803b09fd3197729888ff555ae3"} Jan 27 06:48:58 crc kubenswrapper[4796]: I0127 06:48:58.097400 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-69rqv" event={"ID":"276663de-ff96-4732-911a-fae0b469545e","Type":"ContainerStarted","Data":"d1db7b389a0b04744db14a809aa31be7179610d25e8fb47e6bb835996aa6889f"} Jan 27 06:48:58 crc kubenswrapper[4796]: I0127 06:48:58.101121 4796 generic.go:334] "Generic (PLEG): container finished" podID="e6f0ca0a-ff9c-4420-92bb-517ca68b906c" containerID="8ccd29a9f5a03b136f02a9745478fd4d6bfa62e21a7fd7fa353c5ebeb0c710a4" exitCode=0 Jan 27 06:48:58 crc kubenswrapper[4796]: I0127 06:48:58.101253 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vrm5z" event={"ID":"e6f0ca0a-ff9c-4420-92bb-517ca68b906c","Type":"ContainerDied","Data":"8ccd29a9f5a03b136f02a9745478fd4d6bfa62e21a7fd7fa353c5ebeb0c710a4"} Jan 27 06:48:58 crc kubenswrapper[4796]: I0127 06:48:58.101298 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vrm5z" event={"ID":"e6f0ca0a-ff9c-4420-92bb-517ca68b906c","Type":"ContainerStarted","Data":"2c189a000e3102a41ee1babce31bff4b4e0b9101fee81950c3584f75f55e29f3"} Jan 27 06:48:58 crc kubenswrapper[4796]: I0127 06:48:58.106543 4796 generic.go:334] "Generic (PLEG): container finished" podID="5397ad23-1136-4577-b821-4199914b8582" containerID="b010a2a047406e03fc00ee3e681aee8bd04ca70cbb8e8134c3eccd326d3b11e0" exitCode=0 Jan 27 06:48:58 crc kubenswrapper[4796]: I0127 06:48:58.106626 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2pn7g" event={"ID":"5397ad23-1136-4577-b821-4199914b8582","Type":"ContainerDied","Data":"b010a2a047406e03fc00ee3e681aee8bd04ca70cbb8e8134c3eccd326d3b11e0"} Jan 27 06:48:58 crc kubenswrapper[4796]: I0127 06:48:58.117858 4796 generic.go:334] "Generic (PLEG): container finished" podID="ae643542-ca5e-4cee-aaba-818f3d424763" containerID="26e7b8ba591ed48d7a1a28141f5bf8dfd8f8a420ac08645cf6b279479662c366" exitCode=0 Jan 27 06:48:58 crc kubenswrapper[4796]: I0127 06:48:58.119272 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mn94j" event={"ID":"ae643542-ca5e-4cee-aaba-818f3d424763","Type":"ContainerDied","Data":"26e7b8ba591ed48d7a1a28141f5bf8dfd8f8a420ac08645cf6b279479662c366"} Jan 27 06:48:58 crc kubenswrapper[4796]: I0127 06:48:58.119821 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="hostpath-provisioner/csi-hostpathplugin-69rqv" podStartSLOduration=13.119794238 podStartE2EDuration="13.119794238s" podCreationTimestamp="2026-01-27 06:48:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:58.119256404 +0000 UTC m=+159.226223731" watchObservedRunningTime="2026-01-27 06:48:58.119794238 +0000 UTC m=+159.226761575" Jan 27 06:48:58 crc kubenswrapper[4796]: I0127 06:48:58.127923 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pxnl2" Jan 27 06:48:58 crc kubenswrapper[4796]: I0127 06:48:58.640268 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-qwn64" Jan 27 06:48:58 crc kubenswrapper[4796]: I0127 06:48:58.644935 4796 patch_prober.go:28] interesting pod/router-default-5444994796-qwn64 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:48:58 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Jan 27 06:48:58 crc kubenswrapper[4796]: [+]process-running ok Jan 27 06:48:58 crc kubenswrapper[4796]: healthz check failed Jan 27 06:48:58 crc kubenswrapper[4796]: I0127 06:48:58.645011 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qwn64" podUID="663b211e-0671-47e8-ae93-5c177e7a21eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:48:58 crc kubenswrapper[4796]: I0127 06:48:58.767630 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 27 06:48:58 crc kubenswrapper[4796]: I0127 06:48:58.892620 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-kd67n" Jan 27 06:48:58 crc kubenswrapper[4796]: I0127 06:48:58.895134 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 06:48:58 crc kubenswrapper[4796]: I0127 06:48:58.972173 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98eaa352-8307-40bd-b8a3-16f2e3088fa4-secret-volume\") pod \"98eaa352-8307-40bd-b8a3-16f2e3088fa4\" (UID: \"98eaa352-8307-40bd-b8a3-16f2e3088fa4\") " Jan 27 06:48:58 crc kubenswrapper[4796]: I0127 06:48:58.972312 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7cdd44d2-d007-4f73-9203-5922a7d660f5-kube-api-access\") pod \"7cdd44d2-d007-4f73-9203-5922a7d660f5\" (UID: \"7cdd44d2-d007-4f73-9203-5922a7d660f5\") " Jan 27 06:48:58 crc kubenswrapper[4796]: I0127 06:48:58.972382 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bz49\" (UniqueName: \"kubernetes.io/projected/98eaa352-8307-40bd-b8a3-16f2e3088fa4-kube-api-access-8bz49\") pod \"98eaa352-8307-40bd-b8a3-16f2e3088fa4\" (UID: \"98eaa352-8307-40bd-b8a3-16f2e3088fa4\") " Jan 27 06:48:58 crc kubenswrapper[4796]: I0127 06:48:58.972412 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98eaa352-8307-40bd-b8a3-16f2e3088fa4-config-volume\") pod \"98eaa352-8307-40bd-b8a3-16f2e3088fa4\" (UID: \"98eaa352-8307-40bd-b8a3-16f2e3088fa4\") " Jan 27 06:48:58 crc kubenswrapper[4796]: I0127 06:48:58.972454 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7cdd44d2-d007-4f73-9203-5922a7d660f5-kubelet-dir\") pod \"7cdd44d2-d007-4f73-9203-5922a7d660f5\" (UID: \"7cdd44d2-d007-4f73-9203-5922a7d660f5\") " Jan 27 06:48:58 crc kubenswrapper[4796]: I0127 06:48:58.974049 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98eaa352-8307-40bd-b8a3-16f2e3088fa4-config-volume" (OuterVolumeSpecName: "config-volume") pod "98eaa352-8307-40bd-b8a3-16f2e3088fa4" (UID: "98eaa352-8307-40bd-b8a3-16f2e3088fa4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:48:58 crc kubenswrapper[4796]: I0127 06:48:58.974298 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cdd44d2-d007-4f73-9203-5922a7d660f5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7cdd44d2-d007-4f73-9203-5922a7d660f5" (UID: "7cdd44d2-d007-4f73-9203-5922a7d660f5"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:48:58 crc kubenswrapper[4796]: I0127 06:48:58.975213 4796 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98eaa352-8307-40bd-b8a3-16f2e3088fa4-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 06:48:58 crc kubenswrapper[4796]: I0127 06:48:58.975238 4796 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7cdd44d2-d007-4f73-9203-5922a7d660f5-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 06:48:58 crc kubenswrapper[4796]: I0127 06:48:58.983309 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98eaa352-8307-40bd-b8a3-16f2e3088fa4-kube-api-access-8bz49" (OuterVolumeSpecName: "kube-api-access-8bz49") pod "98eaa352-8307-40bd-b8a3-16f2e3088fa4" (UID: "98eaa352-8307-40bd-b8a3-16f2e3088fa4"). InnerVolumeSpecName "kube-api-access-8bz49". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:48:58 crc kubenswrapper[4796]: I0127 06:48:58.984766 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cdd44d2-d007-4f73-9203-5922a7d660f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7cdd44d2-d007-4f73-9203-5922a7d660f5" (UID: "7cdd44d2-d007-4f73-9203-5922a7d660f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:48:58 crc kubenswrapper[4796]: I0127 06:48:58.987056 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98eaa352-8307-40bd-b8a3-16f2e3088fa4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "98eaa352-8307-40bd-b8a3-16f2e3088fa4" (UID: "98eaa352-8307-40bd-b8a3-16f2e3088fa4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:48:59 crc kubenswrapper[4796]: I0127 06:48:59.076721 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7cdd44d2-d007-4f73-9203-5922a7d660f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 06:48:59 crc kubenswrapper[4796]: I0127 06:48:59.076773 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bz49\" (UniqueName: \"kubernetes.io/projected/98eaa352-8307-40bd-b8a3-16f2e3088fa4-kube-api-access-8bz49\") on node \"crc\" DevicePath \"\"" Jan 27 06:48:59 crc kubenswrapper[4796]: I0127 06:48:59.076792 4796 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98eaa352-8307-40bd-b8a3-16f2e3088fa4-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 06:48:59 crc kubenswrapper[4796]: I0127 06:48:59.174047 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 27 06:48:59 crc kubenswrapper[4796]: I0127 06:48:59.175057 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7cdd44d2-d007-4f73-9203-5922a7d660f5","Type":"ContainerDied","Data":"f2b74e926be2856ab2df0e24faf38fe1c89527ad9c28127e9b5171706d775184"} Jan 27 06:48:59 crc kubenswrapper[4796]: I0127 06:48:59.175220 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2b74e926be2856ab2df0e24faf38fe1c89527ad9c28127e9b5171706d775184" Jan 27 06:48:59 crc kubenswrapper[4796]: I0127 06:48:59.177878 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"02b39a4a-b5bc-4367-8463-f772c5768b9c","Type":"ContainerStarted","Data":"397c99cdcc4a80e69912dff634100144a18b50002f49574196dfc6855aab6b5a"} Jan 27 06:48:59 crc kubenswrapper[4796]: I0127 06:48:59.182070 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" event={"ID":"72371683-98ec-4d1a-a1cf-f9d2b072c3d7","Type":"ContainerStarted","Data":"b8483970e1f5fddb5da373d8843911b3817b940030d8cf155e47ee3925d705b8"} Jan 27 06:48:59 crc kubenswrapper[4796]: I0127 06:48:59.182096 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" event={"ID":"72371683-98ec-4d1a-a1cf-f9d2b072c3d7","Type":"ContainerStarted","Data":"59c5d48238599bc8a275bbf7a5dcc01a2f0bac0d2d976b3487fdc4aa4b49c496"} Jan 27 06:48:59 crc kubenswrapper[4796]: I0127 06:48:59.182496 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:48:59 crc kubenswrapper[4796]: I0127 06:48:59.184635 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-kd67n" Jan 27 06:48:59 crc kubenswrapper[4796]: I0127 06:48:59.186445 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491605-kd67n" event={"ID":"98eaa352-8307-40bd-b8a3-16f2e3088fa4","Type":"ContainerDied","Data":"61f04f2e352b169d1dde701943e15dff131c54afc02a557fb203f8e0f238a58c"} Jan 27 06:48:59 crc kubenswrapper[4796]: I0127 06:48:59.186599 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61f04f2e352b169d1dde701943e15dff131c54afc02a557fb203f8e0f238a58c" Jan 27 06:48:59 crc kubenswrapper[4796]: I0127 06:48:59.225476 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.225441361 podStartE2EDuration="2.225441361s" podCreationTimestamp="2026-01-27 06:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:59.199992587 +0000 UTC m=+160.306959914" watchObservedRunningTime="2026-01-27 06:48:59.225441361 +0000 UTC m=+160.332408688" Jan 27 06:48:59 crc kubenswrapper[4796]: I0127 06:48:59.643010 4796 patch_prober.go:28] interesting pod/router-default-5444994796-qwn64 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:48:59 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Jan 27 06:48:59 crc kubenswrapper[4796]: [+]process-running ok Jan 27 06:48:59 crc kubenswrapper[4796]: healthz check failed Jan 27 06:48:59 crc kubenswrapper[4796]: I0127 06:48:59.652327 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qwn64" podUID="663b211e-0671-47e8-ae93-5c177e7a21eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:48:59 crc kubenswrapper[4796]: I0127 06:48:59.959426 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" podStartSLOduration=132.959403298 podStartE2EDuration="2m12.959403298s" podCreationTimestamp="2026-01-27 06:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:48:59.235075443 +0000 UTC m=+160.342042770" watchObservedRunningTime="2026-01-27 06:48:59.959403298 +0000 UTC m=+161.066370625" Jan 27 06:49:00 crc kubenswrapper[4796]: I0127 06:49:00.203738 4796 generic.go:334] "Generic (PLEG): container finished" podID="02b39a4a-b5bc-4367-8463-f772c5768b9c" containerID="397c99cdcc4a80e69912dff634100144a18b50002f49574196dfc6855aab6b5a" exitCode=0 Jan 27 06:49:00 crc kubenswrapper[4796]: I0127 06:49:00.203810 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"02b39a4a-b5bc-4367-8463-f772c5768b9c","Type":"ContainerDied","Data":"397c99cdcc4a80e69912dff634100144a18b50002f49574196dfc6855aab6b5a"} Jan 27 06:49:00 crc kubenswrapper[4796]: I0127 06:49:00.641910 4796 patch_prober.go:28] interesting pod/router-default-5444994796-qwn64 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 
06:49:00 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Jan 27 06:49:00 crc kubenswrapper[4796]: [+]process-running ok Jan 27 06:49:00 crc kubenswrapper[4796]: healthz check failed Jan 27 06:49:00 crc kubenswrapper[4796]: I0127 06:49:00.642084 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qwn64" podUID="663b211e-0671-47e8-ae93-5c177e7a21eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:49:01 crc kubenswrapper[4796]: I0127 06:49:01.647808 4796 patch_prober.go:28] interesting pod/router-default-5444994796-qwn64 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:49:01 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Jan 27 06:49:01 crc kubenswrapper[4796]: [+]process-running ok Jan 27 06:49:01 crc kubenswrapper[4796]: healthz check failed Jan 27 06:49:01 crc kubenswrapper[4796]: I0127 06:49:01.647868 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qwn64" podUID="663b211e-0671-47e8-ae93-5c177e7a21eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:49:01 crc kubenswrapper[4796]: I0127 06:49:01.714010 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 06:49:01 crc kubenswrapper[4796]: I0127 06:49:01.758196 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/02b39a4a-b5bc-4367-8463-f772c5768b9c-kubelet-dir\") pod \"02b39a4a-b5bc-4367-8463-f772c5768b9c\" (UID: \"02b39a4a-b5bc-4367-8463-f772c5768b9c\") " Jan 27 06:49:01 crc kubenswrapper[4796]: I0127 06:49:01.758306 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02b39a4a-b5bc-4367-8463-f772c5768b9c-kube-api-access\") pod \"02b39a4a-b5bc-4367-8463-f772c5768b9c\" (UID: \"02b39a4a-b5bc-4367-8463-f772c5768b9c\") " Jan 27 06:49:01 crc kubenswrapper[4796]: I0127 06:49:01.758380 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02b39a4a-b5bc-4367-8463-f772c5768b9c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "02b39a4a-b5bc-4367-8463-f772c5768b9c" (UID: "02b39a4a-b5bc-4367-8463-f772c5768b9c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:49:01 crc kubenswrapper[4796]: I0127 06:49:01.758644 4796 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/02b39a4a-b5bc-4367-8463-f772c5768b9c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 06:49:01 crc kubenswrapper[4796]: I0127 06:49:01.764260 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02b39a4a-b5bc-4367-8463-f772c5768b9c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "02b39a4a-b5bc-4367-8463-f772c5768b9c" (UID: "02b39a4a-b5bc-4367-8463-f772c5768b9c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:49:01 crc kubenswrapper[4796]: I0127 06:49:01.860554 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02b39a4a-b5bc-4367-8463-f772c5768b9c-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 06:49:02 crc kubenswrapper[4796]: I0127 06:49:02.242883 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"02b39a4a-b5bc-4367-8463-f772c5768b9c","Type":"ContainerDied","Data":"22ab03cf99f7f924eb598ef24a3c1a05b1f676f29b85321c61372dc6cdd160cb"} Jan 27 06:49:02 crc kubenswrapper[4796]: I0127 06:49:02.242936 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22ab03cf99f7f924eb598ef24a3c1a05b1f676f29b85321c61372dc6cdd160cb" Jan 27 06:49:02 crc kubenswrapper[4796]: I0127 06:49:02.242979 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 27 06:49:02 crc kubenswrapper[4796]: E0127 06:49:02.299868 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod02b39a4a_b5bc_4367_8463_f772c5768b9c.slice\": RecentStats: unable to find data in memory cache]" Jan 27 06:49:02 crc kubenswrapper[4796]: I0127 06:49:02.640734 4796 patch_prober.go:28] interesting pod/router-default-5444994796-qwn64 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:49:02 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Jan 27 06:49:02 crc kubenswrapper[4796]: [+]process-running ok Jan 27 06:49:02 crc kubenswrapper[4796]: healthz check failed Jan 27 06:49:02 crc kubenswrapper[4796]: I0127 06:49:02.640946 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qwn64" podUID="663b211e-0671-47e8-ae93-5c177e7a21eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:49:02 crc kubenswrapper[4796]: I0127 06:49:02.988599 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" Jan 27 06:49:02 crc kubenswrapper[4796]: I0127 06:49:02.994655 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-m8qq4" Jan 27 06:49:03 crc kubenswrapper[4796]: I0127 06:49:03.641987 4796 patch_prober.go:28] interesting pod/router-default-5444994796-qwn64 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:49:03 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Jan 27 06:49:03 crc kubenswrapper[4796]: [+]process-running ok Jan 27 06:49:03 crc kubenswrapper[4796]: healthz check failed Jan 27 06:49:03 crc kubenswrapper[4796]: I0127 06:49:03.642751 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qwn64" podUID="663b211e-0671-47e8-ae93-5c177e7a21eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:49:03 crc kubenswrapper[4796]: I0127 06:49:03.705861 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-dns/dns-default-q644h" Jan 27 06:49:03 crc kubenswrapper[4796]: I0127 06:49:03.790042 4796 patch_prober.go:28] interesting pod/machine-config-daemon-qfqgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 06:49:03 crc kubenswrapper[4796]: I0127 06:49:03.790181 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" podUID="84d7512b-555d-440a-b817-deb8ba12f61d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 06:49:04 crc kubenswrapper[4796]: I0127 06:49:04.641334 4796 patch_prober.go:28] interesting pod/router-default-5444994796-qwn64 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:49:04 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Jan 27 06:49:04 crc kubenswrapper[4796]: [+]process-running ok Jan 27 06:49:04 crc kubenswrapper[4796]: healthz check failed Jan 27 06:49:04 crc kubenswrapper[4796]: I0127 06:49:04.641406 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qwn64" podUID="663b211e-0671-47e8-ae93-5c177e7a21eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:49:05 crc kubenswrapper[4796]: I0127 06:49:05.639752 4796 patch_prober.go:28] interesting pod/router-default-5444994796-qwn64 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:49:05 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Jan 27 06:49:05 crc kubenswrapper[4796]: [+]process-running ok Jan 27 06:49:05 crc kubenswrapper[4796]: healthz check failed Jan 27 06:49:05 crc kubenswrapper[4796]: I0127 06:49:05.639850 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qwn64" podUID="663b211e-0671-47e8-ae93-5c177e7a21eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:49:06 crc kubenswrapper[4796]: I0127 06:49:06.639948 4796 patch_prober.go:28] interesting pod/router-default-5444994796-qwn64 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:49:06 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Jan 27 06:49:06 crc kubenswrapper[4796]: [+]process-running ok Jan 27 06:49:06 crc kubenswrapper[4796]: healthz check failed Jan 27 06:49:06 crc kubenswrapper[4796]: I0127 06:49:06.640034 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qwn64" podUID="663b211e-0671-47e8-ae93-5c177e7a21eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:49:07 crc kubenswrapper[4796]: I0127 06:49:07.484905 4796 patch_prober.go:28] interesting pod/downloads-7954f5f757-ltfqp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 
10.217.0.19:8080: connect: connection refused" start-of-body= Jan 27 06:49:07 crc kubenswrapper[4796]: I0127 06:49:07.485282 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-ltfqp" podUID="f2e2d019-7fb7-4f75-81ee-b20a700c8f0b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Jan 27 06:49:07 crc kubenswrapper[4796]: I0127 06:49:07.484971 4796 patch_prober.go:28] interesting pod/downloads-7954f5f757-ltfqp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" start-of-body= Jan 27 06:49:07 crc kubenswrapper[4796]: I0127 06:49:07.485381 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ltfqp" podUID="f2e2d019-7fb7-4f75-81ee-b20a700c8f0b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.19:8080/\": dial tcp 10.217.0.19:8080: connect: connection refused" Jan 27 06:49:07 crc kubenswrapper[4796]: I0127 06:49:07.642347 4796 patch_prober.go:28] interesting pod/router-default-5444994796-qwn64 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:49:07 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Jan 27 06:49:07 crc kubenswrapper[4796]: [+]process-running ok Jan 27 06:49:07 crc kubenswrapper[4796]: healthz check failed Jan 27 06:49:07 crc kubenswrapper[4796]: I0127 06:49:07.642438 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qwn64" podUID="663b211e-0671-47e8-ae93-5c177e7a21eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:49:07 crc kubenswrapper[4796]: I0127 06:49:07.990298 4796 patch_prober.go:28] interesting pod/console-f9d7485db-r6xbk container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 27 06:49:07 crc kubenswrapper[4796]: I0127 06:49:07.990362 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-r6xbk" podUID="00061f00-b799-407e-8b71-30de57b92847" containerName="console" probeResult="failure" output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 27 06:49:08 crc kubenswrapper[4796]: I0127 06:49:08.639307 4796 patch_prober.go:28] interesting pod/router-default-5444994796-qwn64 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:49:08 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Jan 27 06:49:08 crc kubenswrapper[4796]: [+]process-running ok Jan 27 06:49:08 crc kubenswrapper[4796]: healthz check failed Jan 27 06:49:08 crc kubenswrapper[4796]: I0127 06:49:08.639416 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qwn64" podUID="663b211e-0671-47e8-ae93-5c177e7a21eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:49:09 crc kubenswrapper[4796]: I0127 06:49:09.640473 4796 
patch_prober.go:28] interesting pod/router-default-5444994796-qwn64 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:49:09 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Jan 27 06:49:09 crc kubenswrapper[4796]: [+]process-running ok Jan 27 06:49:09 crc kubenswrapper[4796]: healthz check failed Jan 27 06:49:09 crc kubenswrapper[4796]: I0127 06:49:09.640605 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qwn64" podUID="663b211e-0671-47e8-ae93-5c177e7a21eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:49:10 crc kubenswrapper[4796]: I0127 06:49:10.406159 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09-metrics-certs\") pod \"network-metrics-daemon-gvx56\" (UID: \"bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09\") " pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:49:10 crc kubenswrapper[4796]: I0127 06:49:10.415676 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09-metrics-certs\") pod \"network-metrics-daemon-gvx56\" (UID: \"bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09\") " pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:49:10 crc kubenswrapper[4796]: I0127 06:49:10.641570 4796 patch_prober.go:28] interesting pod/router-default-5444994796-qwn64 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:49:10 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Jan 27 06:49:10 crc kubenswrapper[4796]: [+]process-running ok Jan 27 06:49:10 crc kubenswrapper[4796]: healthz check failed Jan 27 06:49:10 crc kubenswrapper[4796]: I0127 06:49:10.641677 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qwn64" podUID="663b211e-0671-47e8-ae93-5c177e7a21eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:49:10 crc kubenswrapper[4796]: I0127 06:49:10.671404 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gvx56" Jan 27 06:49:11 crc kubenswrapper[4796]: I0127 06:49:11.641731 4796 patch_prober.go:28] interesting pod/router-default-5444994796-qwn64 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:49:11 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Jan 27 06:49:11 crc kubenswrapper[4796]: [+]process-running ok Jan 27 06:49:11 crc kubenswrapper[4796]: healthz check failed Jan 27 06:49:11 crc kubenswrapper[4796]: I0127 06:49:11.642504 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qwn64" podUID="663b211e-0671-47e8-ae93-5c177e7a21eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:49:11 crc kubenswrapper[4796]: I0127 06:49:11.827900 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mr9hz"] Jan 27 06:49:11 crc kubenswrapper[4796]: I0127 06:49:11.828152 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-mr9hz" podUID="192cd808-feeb-4944-a1b3-99109ea0928e" containerName="controller-manager" containerID="cri-o://176dde3c74f1699135fd328d131c9357dc621deb742c350cfb8dcafb7a9d203b" gracePeriod=30 Jan 27 06:49:11 crc kubenswrapper[4796]: I0127 06:49:11.840056 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-428lj"] Jan 27 06:49:11 crc kubenswrapper[4796]: I0127 06:49:11.840278 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-428lj" podUID="1bb7a5d9-b5b4-41b6-ac58-a33656c85de0" containerName="route-controller-manager" containerID="cri-o://4d284c480f5fad2645273a62e245e43f50a3a8013a3fa00d9f985fc261bf607a" gracePeriod=30 Jan 27 06:49:12 crc kubenswrapper[4796]: I0127 06:49:12.641238 4796 patch_prober.go:28] interesting pod/router-default-5444994796-qwn64 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 27 06:49:12 crc kubenswrapper[4796]: [+]has-synced ok Jan 27 06:49:12 crc kubenswrapper[4796]: [+]process-running ok Jan 27 06:49:12 crc kubenswrapper[4796]: healthz check failed Jan 27 06:49:12 crc kubenswrapper[4796]: I0127 06:49:12.641735 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qwn64" podUID="663b211e-0671-47e8-ae93-5c177e7a21eb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:49:13 crc kubenswrapper[4796]: I0127 06:49:13.332162 4796 generic.go:334] "Generic (PLEG): container finished" podID="192cd808-feeb-4944-a1b3-99109ea0928e" containerID="176dde3c74f1699135fd328d131c9357dc621deb742c350cfb8dcafb7a9d203b" exitCode=0 Jan 27 06:49:13 crc kubenswrapper[4796]: I0127 06:49:13.332377 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mr9hz" event={"ID":"192cd808-feeb-4944-a1b3-99109ea0928e","Type":"ContainerDied","Data":"176dde3c74f1699135fd328d131c9357dc621deb742c350cfb8dcafb7a9d203b"} Jan 27 06:49:13 crc kubenswrapper[4796]: I0127 
06:49:13.336713 4796 generic.go:334] "Generic (PLEG): container finished" podID="1bb7a5d9-b5b4-41b6-ac58-a33656c85de0" containerID="4d284c480f5fad2645273a62e245e43f50a3a8013a3fa00d9f985fc261bf607a" exitCode=0 Jan 27 06:49:13 crc kubenswrapper[4796]: I0127 06:49:13.336827 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-428lj" event={"ID":"1bb7a5d9-b5b4-41b6-ac58-a33656c85de0","Type":"ContainerDied","Data":"4d284c480f5fad2645273a62e245e43f50a3a8013a3fa00d9f985fc261bf607a"} Jan 27 06:49:13 crc kubenswrapper[4796]: I0127 06:49:13.646699 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-qwn64" Jan 27 06:49:13 crc kubenswrapper[4796]: I0127 06:49:13.649720 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-qwn64" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.500373 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-ltfqp" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.525175 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.732710 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-428lj" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.739165 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mr9hz" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.762174 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-748b7b7cd9-rr79d"] Jan 27 06:49:17 crc kubenswrapper[4796]: E0127 06:49:17.765578 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cdd44d2-d007-4f73-9203-5922a7d660f5" containerName="pruner" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.765787 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cdd44d2-d007-4f73-9203-5922a7d660f5" containerName="pruner" Jan 27 06:49:17 crc kubenswrapper[4796]: E0127 06:49:17.765857 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bb7a5d9-b5b4-41b6-ac58-a33656c85de0" containerName="route-controller-manager" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.765919 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bb7a5d9-b5b4-41b6-ac58-a33656c85de0" containerName="route-controller-manager" Jan 27 06:49:17 crc kubenswrapper[4796]: E0127 06:49:17.765983 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98eaa352-8307-40bd-b8a3-16f2e3088fa4" containerName="collect-profiles" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.766044 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="98eaa352-8307-40bd-b8a3-16f2e3088fa4" containerName="collect-profiles" Jan 27 06:49:17 crc kubenswrapper[4796]: E0127 06:49:17.766101 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02b39a4a-b5bc-4367-8463-f772c5768b9c" containerName="pruner" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.766157 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="02b39a4a-b5bc-4367-8463-f772c5768b9c" containerName="pruner" Jan 27 
06:49:17 crc kubenswrapper[4796]: E0127 06:49:17.766214 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="192cd808-feeb-4944-a1b3-99109ea0928e" containerName="controller-manager" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.766267 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="192cd808-feeb-4944-a1b3-99109ea0928e" containerName="controller-manager" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.766429 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bb7a5d9-b5b4-41b6-ac58-a33656c85de0" containerName="route-controller-manager" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.766499 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cdd44d2-d007-4f73-9203-5922a7d660f5" containerName="pruner" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.766578 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="98eaa352-8307-40bd-b8a3-16f2e3088fa4" containerName="collect-profiles" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.766645 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="192cd808-feeb-4944-a1b3-99109ea0928e" containerName="controller-manager" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.766718 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="02b39a4a-b5bc-4367-8463-f772c5768b9c" containerName="pruner" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.767814 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-748b7b7cd9-rr79d" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.816653 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1bb7a5d9-b5b4-41b6-ac58-a33656c85de0-client-ca\") pod \"1bb7a5d9-b5b4-41b6-ac58-a33656c85de0\" (UID: \"1bb7a5d9-b5b4-41b6-ac58-a33656c85de0\") " Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.817849 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bb7a5d9-b5b4-41b6-ac58-a33656c85de0-client-ca" (OuterVolumeSpecName: "client-ca") pod "1bb7a5d9-b5b4-41b6-ac58-a33656c85de0" (UID: "1bb7a5d9-b5b4-41b6-ac58-a33656c85de0"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.819222 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5bms\" (UniqueName: \"kubernetes.io/projected/192cd808-feeb-4944-a1b3-99109ea0928e-kube-api-access-v5bms\") pod \"192cd808-feeb-4944-a1b3-99109ea0928e\" (UID: \"192cd808-feeb-4944-a1b3-99109ea0928e\") " Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.819376 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/192cd808-feeb-4944-a1b3-99109ea0928e-serving-cert\") pod \"192cd808-feeb-4944-a1b3-99109ea0928e\" (UID: \"192cd808-feeb-4944-a1b3-99109ea0928e\") " Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.819562 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/192cd808-feeb-4944-a1b3-99109ea0928e-proxy-ca-bundles\") pod \"192cd808-feeb-4944-a1b3-99109ea0928e\" (UID: \"192cd808-feeb-4944-a1b3-99109ea0928e\") " Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.819661 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bb7a5d9-b5b4-41b6-ac58-a33656c85de0-serving-cert\") pod \"1bb7a5d9-b5b4-41b6-ac58-a33656c85de0\" (UID: \"1bb7a5d9-b5b4-41b6-ac58-a33656c85de0\") " Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.819744 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/192cd808-feeb-4944-a1b3-99109ea0928e-client-ca\") pod \"192cd808-feeb-4944-a1b3-99109ea0928e\" (UID: \"192cd808-feeb-4944-a1b3-99109ea0928e\") " Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.819821 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bb7a5d9-b5b4-41b6-ac58-a33656c85de0-config\") pod \"1bb7a5d9-b5b4-41b6-ac58-a33656c85de0\" (UID: \"1bb7a5d9-b5b4-41b6-ac58-a33656c85de0\") " Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.822460 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/192cd808-feeb-4944-a1b3-99109ea0928e-config\") pod \"192cd808-feeb-4944-a1b3-99109ea0928e\" (UID: \"192cd808-feeb-4944-a1b3-99109ea0928e\") " Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.820104 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/192cd808-feeb-4944-a1b3-99109ea0928e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "192cd808-feeb-4944-a1b3-99109ea0928e" (UID: "192cd808-feeb-4944-a1b3-99109ea0928e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.820777 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/192cd808-feeb-4944-a1b3-99109ea0928e-client-ca" (OuterVolumeSpecName: "client-ca") pod "192cd808-feeb-4944-a1b3-99109ea0928e" (UID: "192cd808-feeb-4944-a1b3-99109ea0928e"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.821035 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bb7a5d9-b5b4-41b6-ac58-a33656c85de0-config" (OuterVolumeSpecName: "config") pod "1bb7a5d9-b5b4-41b6-ac58-a33656c85de0" (UID: "1bb7a5d9-b5b4-41b6-ac58-a33656c85de0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.822763 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-748b7b7cd9-rr79d"] Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.823028 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqvpf\" (UniqueName: \"kubernetes.io/projected/1bb7a5d9-b5b4-41b6-ac58-a33656c85de0-kube-api-access-hqvpf\") pod \"1bb7a5d9-b5b4-41b6-ac58-a33656c85de0\" (UID: \"1bb7a5d9-b5b4-41b6-ac58-a33656c85de0\") " Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.823387 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ede723b6-b6d8-4b81-aa39-47241110f7a5-config\") pod \"route-controller-manager-748b7b7cd9-rr79d\" (UID: \"ede723b6-b6d8-4b81-aa39-47241110f7a5\") " pod="openshift-route-controller-manager/route-controller-manager-748b7b7cd9-rr79d" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.824007 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ede723b6-b6d8-4b81-aa39-47241110f7a5-serving-cert\") pod \"route-controller-manager-748b7b7cd9-rr79d\" (UID: \"ede723b6-b6d8-4b81-aa39-47241110f7a5\") " pod="openshift-route-controller-manager/route-controller-manager-748b7b7cd9-rr79d" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.823645 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/192cd808-feeb-4944-a1b3-99109ea0928e-config" (OuterVolumeSpecName: "config") pod "192cd808-feeb-4944-a1b3-99109ea0928e" (UID: "192cd808-feeb-4944-a1b3-99109ea0928e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.824368 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd85q\" (UniqueName: \"kubernetes.io/projected/ede723b6-b6d8-4b81-aa39-47241110f7a5-kube-api-access-qd85q\") pod \"route-controller-manager-748b7b7cd9-rr79d\" (UID: \"ede723b6-b6d8-4b81-aa39-47241110f7a5\") " pod="openshift-route-controller-manager/route-controller-manager-748b7b7cd9-rr79d" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.824520 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ede723b6-b6d8-4b81-aa39-47241110f7a5-client-ca\") pod \"route-controller-manager-748b7b7cd9-rr79d\" (UID: \"ede723b6-b6d8-4b81-aa39-47241110f7a5\") " pod="openshift-route-controller-manager/route-controller-manager-748b7b7cd9-rr79d" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.824783 4796 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/192cd808-feeb-4944-a1b3-99109ea0928e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.824883 4796 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/192cd808-feeb-4944-a1b3-99109ea0928e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.825026 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bb7a5d9-b5b4-41b6-ac58-a33656c85de0-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.825181 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/192cd808-feeb-4944-a1b3-99109ea0928e-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.825286 4796 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1bb7a5d9-b5b4-41b6-ac58-a33656c85de0-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.825515 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/192cd808-feeb-4944-a1b3-99109ea0928e-kube-api-access-v5bms" (OuterVolumeSpecName: "kube-api-access-v5bms") pod "192cd808-feeb-4944-a1b3-99109ea0928e" (UID: "192cd808-feeb-4944-a1b3-99109ea0928e"). InnerVolumeSpecName "kube-api-access-v5bms". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.827021 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/192cd808-feeb-4944-a1b3-99109ea0928e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "192cd808-feeb-4944-a1b3-99109ea0928e" (UID: "192cd808-feeb-4944-a1b3-99109ea0928e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.827283 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bb7a5d9-b5b4-41b6-ac58-a33656c85de0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bb7a5d9-b5b4-41b6-ac58-a33656c85de0" (UID: "1bb7a5d9-b5b4-41b6-ac58-a33656c85de0"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.833257 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bb7a5d9-b5b4-41b6-ac58-a33656c85de0-kube-api-access-hqvpf" (OuterVolumeSpecName: "kube-api-access-hqvpf") pod "1bb7a5d9-b5b4-41b6-ac58-a33656c85de0" (UID: "1bb7a5d9-b5b4-41b6-ac58-a33656c85de0"). InnerVolumeSpecName "kube-api-access-hqvpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.926971 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ede723b6-b6d8-4b81-aa39-47241110f7a5-serving-cert\") pod \"route-controller-manager-748b7b7cd9-rr79d\" (UID: \"ede723b6-b6d8-4b81-aa39-47241110f7a5\") " pod="openshift-route-controller-manager/route-controller-manager-748b7b7cd9-rr79d" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.927049 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd85q\" (UniqueName: \"kubernetes.io/projected/ede723b6-b6d8-4b81-aa39-47241110f7a5-kube-api-access-qd85q\") pod \"route-controller-manager-748b7b7cd9-rr79d\" (UID: \"ede723b6-b6d8-4b81-aa39-47241110f7a5\") " pod="openshift-route-controller-manager/route-controller-manager-748b7b7cd9-rr79d" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.927084 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ede723b6-b6d8-4b81-aa39-47241110f7a5-client-ca\") pod \"route-controller-manager-748b7b7cd9-rr79d\" (UID: \"ede723b6-b6d8-4b81-aa39-47241110f7a5\") " pod="openshift-route-controller-manager/route-controller-manager-748b7b7cd9-rr79d" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.927121 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ede723b6-b6d8-4b81-aa39-47241110f7a5-config\") pod \"route-controller-manager-748b7b7cd9-rr79d\" (UID: \"ede723b6-b6d8-4b81-aa39-47241110f7a5\") " pod="openshift-route-controller-manager/route-controller-manager-748b7b7cd9-rr79d" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.927300 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bb7a5d9-b5b4-41b6-ac58-a33656c85de0-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.927731 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqvpf\" (UniqueName: \"kubernetes.io/projected/1bb7a5d9-b5b4-41b6-ac58-a33656c85de0-kube-api-access-hqvpf\") on node \"crc\" DevicePath \"\"" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.927761 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5bms\" (UniqueName: \"kubernetes.io/projected/192cd808-feeb-4944-a1b3-99109ea0928e-kube-api-access-v5bms\") on node \"crc\" DevicePath \"\"" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.927776 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/192cd808-feeb-4944-a1b3-99109ea0928e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.990023 4796 patch_prober.go:28] interesting pod/console-f9d7485db-r6xbk container/console namespace/openshift-console: Startup probe status=failure 
output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 27 06:49:17 crc kubenswrapper[4796]: I0127 06:49:17.990092 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-r6xbk" podUID="00061f00-b799-407e-8b71-30de57b92847" containerName="console" probeResult="failure" output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 27 06:49:18 crc kubenswrapper[4796]: I0127 06:49:18.017302 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ede723b6-b6d8-4b81-aa39-47241110f7a5-client-ca\") pod \"route-controller-manager-748b7b7cd9-rr79d\" (UID: \"ede723b6-b6d8-4b81-aa39-47241110f7a5\") " pod="openshift-route-controller-manager/route-controller-manager-748b7b7cd9-rr79d" Jan 27 06:49:18 crc kubenswrapper[4796]: I0127 06:49:18.026597 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ede723b6-b6d8-4b81-aa39-47241110f7a5-serving-cert\") pod \"route-controller-manager-748b7b7cd9-rr79d\" (UID: \"ede723b6-b6d8-4b81-aa39-47241110f7a5\") " pod="openshift-route-controller-manager/route-controller-manager-748b7b7cd9-rr79d" Jan 27 06:49:18 crc kubenswrapper[4796]: I0127 06:49:18.027157 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ede723b6-b6d8-4b81-aa39-47241110f7a5-config\") pod \"route-controller-manager-748b7b7cd9-rr79d\" (UID: \"ede723b6-b6d8-4b81-aa39-47241110f7a5\") " pod="openshift-route-controller-manager/route-controller-manager-748b7b7cd9-rr79d" Jan 27 06:49:18 crc kubenswrapper[4796]: I0127 06:49:18.027863 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd85q\" (UniqueName: \"kubernetes.io/projected/ede723b6-b6d8-4b81-aa39-47241110f7a5-kube-api-access-qd85q\") pod \"route-controller-manager-748b7b7cd9-rr79d\" (UID: \"ede723b6-b6d8-4b81-aa39-47241110f7a5\") " pod="openshift-route-controller-manager/route-controller-manager-748b7b7cd9-rr79d" Jan 27 06:49:18 crc kubenswrapper[4796]: I0127 06:49:18.156628 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-748b7b7cd9-rr79d" Jan 27 06:49:18 crc kubenswrapper[4796]: I0127 06:49:18.373076 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-mr9hz" event={"ID":"192cd808-feeb-4944-a1b3-99109ea0928e","Type":"ContainerDied","Data":"937e4ed24324dbc3cc1a505a6d4d1807e484040298d16965ea6a0148862817fd"} Jan 27 06:49:18 crc kubenswrapper[4796]: I0127 06:49:18.373133 4796 scope.go:117] "RemoveContainer" containerID="176dde3c74f1699135fd328d131c9357dc621deb742c350cfb8dcafb7a9d203b" Jan 27 06:49:18 crc kubenswrapper[4796]: I0127 06:49:18.373243 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-mr9hz" Jan 27 06:49:18 crc kubenswrapper[4796]: I0127 06:49:18.390938 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-428lj" event={"ID":"1bb7a5d9-b5b4-41b6-ac58-a33656c85de0","Type":"ContainerDied","Data":"8e0f3ed8bd2c7e20f6421ea32ae795190bd247b0a4587a9522357db1dea6ad47"} Jan 27 06:49:18 crc kubenswrapper[4796]: I0127 06:49:18.391055 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-428lj" Jan 27 06:49:18 crc kubenswrapper[4796]: I0127 06:49:18.417872 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mr9hz"] Jan 27 06:49:18 crc kubenswrapper[4796]: I0127 06:49:18.421664 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-mr9hz"] Jan 27 06:49:18 crc kubenswrapper[4796]: I0127 06:49:18.427693 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-428lj"] Jan 27 06:49:18 crc kubenswrapper[4796]: I0127 06:49:18.431489 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-428lj"] Jan 27 06:49:18 crc kubenswrapper[4796]: I0127 06:49:18.474934 4796 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-428lj container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 06:49:18 crc kubenswrapper[4796]: I0127 06:49:18.475065 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-428lj" podUID="1bb7a5d9-b5b4-41b6-ac58-a33656c85de0" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 06:49:18 crc kubenswrapper[4796]: I0127 06:49:18.759075 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="192cd808-feeb-4944-a1b3-99109ea0928e" path="/var/lib/kubelet/pods/192cd808-feeb-4944-a1b3-99109ea0928e/volumes" Jan 27 06:49:18 crc kubenswrapper[4796]: I0127 06:49:18.761585 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bb7a5d9-b5b4-41b6-ac58-a33656c85de0" path="/var/lib/kubelet/pods/1bb7a5d9-b5b4-41b6-ac58-a33656c85de0/volumes" Jan 27 06:49:19 crc kubenswrapper[4796]: I0127 06:49:19.970472 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5d4dfb9c5f-9gdn2"] Jan 27 06:49:19 crc kubenswrapper[4796]: I0127 06:49:19.971356 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-9gdn2" Jan 27 06:49:19 crc kubenswrapper[4796]: I0127 06:49:19.976896 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 06:49:19 crc kubenswrapper[4796]: I0127 06:49:19.976990 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 06:49:19 crc kubenswrapper[4796]: I0127 06:49:19.976995 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 06:49:19 crc kubenswrapper[4796]: I0127 06:49:19.977492 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 06:49:19 crc kubenswrapper[4796]: I0127 06:49:19.977697 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 06:49:19 crc kubenswrapper[4796]: I0127 06:49:19.977811 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 06:49:19 crc kubenswrapper[4796]: I0127 06:49:19.985194 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 06:49:19 crc kubenswrapper[4796]: I0127 06:49:19.996424 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d4dfb9c5f-9gdn2"] Jan 27 06:49:20 crc kubenswrapper[4796]: I0127 06:49:20.054412 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53b2fdaf-6df8-46ab-bdeb-55b932b32d54-client-ca\") pod \"controller-manager-5d4dfb9c5f-9gdn2\" (UID: \"53b2fdaf-6df8-46ab-bdeb-55b932b32d54\") " pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-9gdn2" Jan 27 06:49:20 crc kubenswrapper[4796]: I0127 06:49:20.054490 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/53b2fdaf-6df8-46ab-bdeb-55b932b32d54-proxy-ca-bundles\") pod \"controller-manager-5d4dfb9c5f-9gdn2\" (UID: \"53b2fdaf-6df8-46ab-bdeb-55b932b32d54\") " pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-9gdn2" Jan 27 06:49:20 crc kubenswrapper[4796]: I0127 06:49:20.054854 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53b2fdaf-6df8-46ab-bdeb-55b932b32d54-serving-cert\") pod \"controller-manager-5d4dfb9c5f-9gdn2\" (UID: \"53b2fdaf-6df8-46ab-bdeb-55b932b32d54\") " pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-9gdn2" Jan 27 06:49:20 crc kubenswrapper[4796]: I0127 06:49:20.054915 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfdqc\" (UniqueName: \"kubernetes.io/projected/53b2fdaf-6df8-46ab-bdeb-55b932b32d54-kube-api-access-tfdqc\") pod \"controller-manager-5d4dfb9c5f-9gdn2\" (UID: \"53b2fdaf-6df8-46ab-bdeb-55b932b32d54\") " pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-9gdn2" Jan 27 06:49:20 crc kubenswrapper[4796]: I0127 06:49:20.054973 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/53b2fdaf-6df8-46ab-bdeb-55b932b32d54-config\") pod \"controller-manager-5d4dfb9c5f-9gdn2\" (UID: \"53b2fdaf-6df8-46ab-bdeb-55b932b32d54\") " pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-9gdn2" Jan 27 06:49:20 crc kubenswrapper[4796]: I0127 06:49:20.156625 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53b2fdaf-6df8-46ab-bdeb-55b932b32d54-serving-cert\") pod \"controller-manager-5d4dfb9c5f-9gdn2\" (UID: \"53b2fdaf-6df8-46ab-bdeb-55b932b32d54\") " pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-9gdn2" Jan 27 06:49:20 crc kubenswrapper[4796]: I0127 06:49:20.156707 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfdqc\" (UniqueName: \"kubernetes.io/projected/53b2fdaf-6df8-46ab-bdeb-55b932b32d54-kube-api-access-tfdqc\") pod \"controller-manager-5d4dfb9c5f-9gdn2\" (UID: \"53b2fdaf-6df8-46ab-bdeb-55b932b32d54\") " pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-9gdn2" Jan 27 06:49:20 crc kubenswrapper[4796]: I0127 06:49:20.156766 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53b2fdaf-6df8-46ab-bdeb-55b932b32d54-config\") pod \"controller-manager-5d4dfb9c5f-9gdn2\" (UID: \"53b2fdaf-6df8-46ab-bdeb-55b932b32d54\") " pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-9gdn2" Jan 27 06:49:20 crc kubenswrapper[4796]: I0127 06:49:20.156809 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53b2fdaf-6df8-46ab-bdeb-55b932b32d54-client-ca\") pod \"controller-manager-5d4dfb9c5f-9gdn2\" (UID: \"53b2fdaf-6df8-46ab-bdeb-55b932b32d54\") " pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-9gdn2" Jan 27 06:49:20 crc kubenswrapper[4796]: I0127 06:49:20.156844 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/53b2fdaf-6df8-46ab-bdeb-55b932b32d54-proxy-ca-bundles\") pod \"controller-manager-5d4dfb9c5f-9gdn2\" (UID: \"53b2fdaf-6df8-46ab-bdeb-55b932b32d54\") " pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-9gdn2" Jan 27 06:49:20 crc kubenswrapper[4796]: I0127 06:49:20.158044 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53b2fdaf-6df8-46ab-bdeb-55b932b32d54-client-ca\") pod \"controller-manager-5d4dfb9c5f-9gdn2\" (UID: \"53b2fdaf-6df8-46ab-bdeb-55b932b32d54\") " pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-9gdn2" Jan 27 06:49:20 crc kubenswrapper[4796]: I0127 06:49:20.159136 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/53b2fdaf-6df8-46ab-bdeb-55b932b32d54-proxy-ca-bundles\") pod \"controller-manager-5d4dfb9c5f-9gdn2\" (UID: \"53b2fdaf-6df8-46ab-bdeb-55b932b32d54\") " pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-9gdn2" Jan 27 06:49:20 crc kubenswrapper[4796]: I0127 06:49:20.159970 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53b2fdaf-6df8-46ab-bdeb-55b932b32d54-config\") pod \"controller-manager-5d4dfb9c5f-9gdn2\" (UID: \"53b2fdaf-6df8-46ab-bdeb-55b932b32d54\") " pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-9gdn2" 
Jan 27 06:49:20 crc kubenswrapper[4796]: I0127 06:49:20.165765 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53b2fdaf-6df8-46ab-bdeb-55b932b32d54-serving-cert\") pod \"controller-manager-5d4dfb9c5f-9gdn2\" (UID: \"53b2fdaf-6df8-46ab-bdeb-55b932b32d54\") " pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-9gdn2" Jan 27 06:49:20 crc kubenswrapper[4796]: I0127 06:49:20.185462 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfdqc\" (UniqueName: \"kubernetes.io/projected/53b2fdaf-6df8-46ab-bdeb-55b932b32d54-kube-api-access-tfdqc\") pod \"controller-manager-5d4dfb9c5f-9gdn2\" (UID: \"53b2fdaf-6df8-46ab-bdeb-55b932b32d54\") " pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-9gdn2" Jan 27 06:49:20 crc kubenswrapper[4796]: I0127 06:49:20.340087 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-9gdn2" Jan 27 06:49:25 crc kubenswrapper[4796]: E0127 06:49:25.664775 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 27 06:49:25 crc kubenswrapper[4796]: E0127 06:49:25.665300 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-47524,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-9rz8s_openshift-marketplace(3a16a514-cf37-4eb5-a775-6dc2573704cf): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 06:49:25 crc kubenswrapper[4796]: E0127 06:49:25.666768 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: 
context canceled\"" pod="openshift-marketplace/community-operators-9rz8s" podUID="3a16a514-cf37-4eb5-a775-6dc2573704cf" Jan 27 06:49:27 crc kubenswrapper[4796]: I0127 06:49:27.994133 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-r6xbk" Jan 27 06:49:27 crc kubenswrapper[4796]: I0127 06:49:27.997556 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-r6xbk" Jan 27 06:49:28 crc kubenswrapper[4796]: I0127 06:49:28.232107 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sk2nz" Jan 27 06:49:30 crc kubenswrapper[4796]: E0127 06:49:30.471463 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9rz8s" podUID="3a16a514-cf37-4eb5-a775-6dc2573704cf" Jan 27 06:49:30 crc kubenswrapper[4796]: E0127 06:49:30.914514 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 27 06:49:30 crc kubenswrapper[4796]: E0127 06:49:30.915618 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-blrrh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-mn94j_openshift-marketplace(ae643542-ca5e-4cee-aaba-818f3d424763): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 06:49:30 crc kubenswrapper[4796]: E0127 06:49:30.916868 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying 
system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-mn94j" podUID="ae643542-ca5e-4cee-aaba-818f3d424763" Jan 27 06:49:31 crc kubenswrapper[4796]: I0127 06:49:31.798325 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d4dfb9c5f-9gdn2"] Jan 27 06:49:31 crc kubenswrapper[4796]: I0127 06:49:31.903989 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-748b7b7cd9-rr79d"] Jan 27 06:49:33 crc kubenswrapper[4796]: I0127 06:49:33.626759 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 06:49:33 crc kubenswrapper[4796]: I0127 06:49:33.628020 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 06:49:33 crc kubenswrapper[4796]: I0127 06:49:33.630096 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 06:49:33 crc kubenswrapper[4796]: I0127 06:49:33.632409 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 27 06:49:33 crc kubenswrapper[4796]: I0127 06:49:33.632415 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 27 06:49:33 crc kubenswrapper[4796]: I0127 06:49:33.673670 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3cb2188d-d559-4429-9997-5e7729ecc05b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3cb2188d-d559-4429-9997-5e7729ecc05b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 06:49:33 crc kubenswrapper[4796]: I0127 06:49:33.673773 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3cb2188d-d559-4429-9997-5e7729ecc05b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3cb2188d-d559-4429-9997-5e7729ecc05b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 06:49:33 crc kubenswrapper[4796]: I0127 06:49:33.777573 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3cb2188d-d559-4429-9997-5e7729ecc05b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3cb2188d-d559-4429-9997-5e7729ecc05b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 06:49:33 crc kubenswrapper[4796]: I0127 06:49:33.777635 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3cb2188d-d559-4429-9997-5e7729ecc05b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3cb2188d-d559-4429-9997-5e7729ecc05b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 06:49:33 crc kubenswrapper[4796]: I0127 06:49:33.777697 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3cb2188d-d559-4429-9997-5e7729ecc05b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3cb2188d-d559-4429-9997-5e7729ecc05b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 06:49:33 crc kubenswrapper[4796]: I0127 06:49:33.788639 4796 patch_prober.go:28] interesting 
pod/machine-config-daemon-qfqgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 06:49:33 crc kubenswrapper[4796]: I0127 06:49:33.788712 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" podUID="84d7512b-555d-440a-b817-deb8ba12f61d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 06:49:33 crc kubenswrapper[4796]: I0127 06:49:33.804627 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3cb2188d-d559-4429-9997-5e7729ecc05b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3cb2188d-d559-4429-9997-5e7729ecc05b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 06:49:33 crc kubenswrapper[4796]: I0127 06:49:33.917329 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 27 06:49:33 crc kubenswrapper[4796]: I0127 06:49:33.950941 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 06:49:34 crc kubenswrapper[4796]: E0127 06:49:34.126616 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-mn94j" podUID="ae643542-ca5e-4cee-aaba-818f3d424763" Jan 27 06:49:36 crc kubenswrapper[4796]: E0127 06:49:36.075101 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 27 06:49:36 crc kubenswrapper[4796]: E0127 06:49:36.075313 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-79thm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-j8r59_openshift-marketplace(25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 06:49:36 crc kubenswrapper[4796]: E0127 06:49:36.076571 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-j8r59" podUID="25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae" Jan 27 06:49:37 crc kubenswrapper[4796]: E0127 06:49:37.145838 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-j8r59" podUID="25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae" Jan 27 06:49:37 crc kubenswrapper[4796]: I0127 06:49:37.180575 4796 scope.go:117] "RemoveContainer" containerID="4d284c480f5fad2645273a62e245e43f50a3a8013a3fa00d9f985fc261bf607a" Jan 27 06:49:37 crc kubenswrapper[4796]: E0127 06:49:37.323058 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 27 06:49:37 crc kubenswrapper[4796]: E0127 06:49:37.323684 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-47d6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-2pn7g_openshift-marketplace(5397ad23-1136-4577-b821-4199914b8582): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 06:49:37 crc kubenswrapper[4796]: E0127 06:49:37.325727 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-2pn7g" podUID="5397ad23-1136-4577-b821-4199914b8582" Jan 27 06:49:37 crc kubenswrapper[4796]: I0127 06:49:37.392228 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-gvx56"] Jan 27 06:49:37 crc kubenswrapper[4796]: I0127 06:49:37.486856 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d4dfb9c5f-9gdn2"] Jan 27 06:49:37 crc kubenswrapper[4796]: I0127 06:49:37.510402 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gvx56" event={"ID":"bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09","Type":"ContainerStarted","Data":"f4b8459d0a3a42e84cd16fb245d5de852de513fefd3ebd2f4dce903c109d0976"} Jan 27 06:49:37 crc kubenswrapper[4796]: I0127 06:49:37.511464 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 27 06:49:37 crc kubenswrapper[4796]: I0127 06:49:37.513315 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-9gdn2" event={"ID":"53b2fdaf-6df8-46ab-bdeb-55b932b32d54","Type":"ContainerStarted","Data":"10039273fcc68d3db0a37b0b8caa430209ed6f85301c36adc688e4879bc0866d"} Jan 27 06:49:37 crc kubenswrapper[4796]: E0127 06:49:37.514935 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2pn7g" 
podUID="5397ad23-1136-4577-b821-4199914b8582" Jan 27 06:49:37 crc kubenswrapper[4796]: I0127 06:49:37.773343 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-748b7b7cd9-rr79d"] Jan 27 06:49:37 crc kubenswrapper[4796]: W0127 06:49:37.784413 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podede723b6_b6d8_4b81_aa39_47241110f7a5.slice/crio-2ad2120310472dcdcd25a0943bb1b69be6583b063e217154a031d1fe7ee2b4bc WatchSource:0}: Error finding container 2ad2120310472dcdcd25a0943bb1b69be6583b063e217154a031d1fe7ee2b4bc: Status 404 returned error can't find the container with id 2ad2120310472dcdcd25a0943bb1b69be6583b063e217154a031d1fe7ee2b4bc Jan 27 06:49:38 crc kubenswrapper[4796]: E0127 06:49:38.284309 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 27 06:49:38 crc kubenswrapper[4796]: E0127 06:49:38.284795 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xn6z8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-vrm5z_openshift-marketplace(e6f0ca0a-ff9c-4420-92bb-517ca68b906c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 06:49:38 crc kubenswrapper[4796]: E0127 06:49:38.285978 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-vrm5z" podUID="e6f0ca0a-ff9c-4420-92bb-517ca68b906c" Jan 27 06:49:38 crc kubenswrapper[4796]: E0127 06:49:38.481075 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled 
desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 27 06:49:38 crc kubenswrapper[4796]: E0127 06:49:38.481234 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6g55k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-cv5hm_openshift-marketplace(242ef06f-796a-4c77-810b-bde4a5fbc087): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 06:49:38 crc kubenswrapper[4796]: E0127 06:49:38.482707 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-cv5hm" podUID="242ef06f-796a-4c77-810b-bde4a5fbc087" Jan 27 06:49:38 crc kubenswrapper[4796]: I0127 06:49:38.520765 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-748b7b7cd9-rr79d" podUID="ede723b6-b6d8-4b81-aa39-47241110f7a5" containerName="route-controller-manager" containerID="cri-o://40adb49aa43c6cf3cda1421e24e8dba7ee8757b6ca77883c91247800818e5256" gracePeriod=30 Jan 27 06:49:38 crc kubenswrapper[4796]: I0127 06:49:38.521272 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-748b7b7cd9-rr79d" event={"ID":"ede723b6-b6d8-4b81-aa39-47241110f7a5","Type":"ContainerStarted","Data":"40adb49aa43c6cf3cda1421e24e8dba7ee8757b6ca77883c91247800818e5256"} Jan 27 06:49:38 crc kubenswrapper[4796]: I0127 06:49:38.521337 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-748b7b7cd9-rr79d" 
event={"ID":"ede723b6-b6d8-4b81-aa39-47241110f7a5","Type":"ContainerStarted","Data":"2ad2120310472dcdcd25a0943bb1b69be6583b063e217154a031d1fe7ee2b4bc"} Jan 27 06:49:38 crc kubenswrapper[4796]: I0127 06:49:38.521704 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-748b7b7cd9-rr79d" Jan 27 06:49:38 crc kubenswrapper[4796]: I0127 06:49:38.526318 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-9gdn2" event={"ID":"53b2fdaf-6df8-46ab-bdeb-55b932b32d54","Type":"ContainerStarted","Data":"7151fdfe69496e7e83d8359771174de05a333ad19d14decc90f059ecbe7edd5a"} Jan 27 06:49:38 crc kubenswrapper[4796]: I0127 06:49:38.526687 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-9gdn2" Jan 27 06:49:38 crc kubenswrapper[4796]: I0127 06:49:38.526569 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-9gdn2" podUID="53b2fdaf-6df8-46ab-bdeb-55b932b32d54" containerName="controller-manager" containerID="cri-o://7151fdfe69496e7e83d8359771174de05a333ad19d14decc90f059ecbe7edd5a" gracePeriod=30 Jan 27 06:49:38 crc kubenswrapper[4796]: I0127 06:49:38.533345 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gvx56" event={"ID":"bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09","Type":"ContainerStarted","Data":"634e99ace24f2e4a04a9cb6cbcf07c1e2fd3665455f9bf80501e3deccafae321"} Jan 27 06:49:38 crc kubenswrapper[4796]: I0127 06:49:38.539477 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-9gdn2" Jan 27 06:49:38 crc kubenswrapper[4796]: I0127 06:49:38.539748 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-748b7b7cd9-rr79d" podStartSLOduration=27.539726403 podStartE2EDuration="27.539726403s" podCreationTimestamp="2026-01-27 06:49:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:49:38.538772227 +0000 UTC m=+199.645739574" watchObservedRunningTime="2026-01-27 06:49:38.539726403 +0000 UTC m=+199.646693730" Jan 27 06:49:38 crc kubenswrapper[4796]: I0127 06:49:38.540802 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3cb2188d-d559-4429-9997-5e7729ecc05b","Type":"ContainerStarted","Data":"5783b7d490e2b49d23c718bcb4a31c129213f0385f44a21c6b0b8dd04eff54d5"} Jan 27 06:49:38 crc kubenswrapper[4796]: I0127 06:49:38.540833 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3cb2188d-d559-4429-9997-5e7729ecc05b","Type":"ContainerStarted","Data":"93bdd8f081fc1f1a9b1d30086e88fc50c2454f20d5ee05639133af5c01259b4a"} Jan 27 06:49:38 crc kubenswrapper[4796]: E0127 06:49:38.541582 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-cv5hm" podUID="242ef06f-796a-4c77-810b-bde4a5fbc087" Jan 27 06:49:38 crc kubenswrapper[4796]: E0127 06:49:38.542696 4796 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-vrm5z" podUID="e6f0ca0a-ff9c-4420-92bb-517ca68b906c" Jan 27 06:49:38 crc kubenswrapper[4796]: I0127 06:49:38.558687 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-9gdn2" podStartSLOduration=27.558667116 podStartE2EDuration="27.558667116s" podCreationTimestamp="2026-01-27 06:49:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:49:38.556052009 +0000 UTC m=+199.663019346" watchObservedRunningTime="2026-01-27 06:49:38.558667116 +0000 UTC m=+199.665634443" Jan 27 06:49:38 crc kubenswrapper[4796]: I0127 06:49:38.575659 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=5.5756398 podStartE2EDuration="5.5756398s" podCreationTimestamp="2026-01-27 06:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:49:38.574799777 +0000 UTC m=+199.681767104" watchObservedRunningTime="2026-01-27 06:49:38.5756398 +0000 UTC m=+199.682607117" Jan 27 06:49:38 crc kubenswrapper[4796]: I0127 06:49:38.724370 4796 patch_prober.go:28] interesting pod/route-controller-manager-748b7b7cd9-rr79d container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": read tcp 10.217.0.2:59146->10.217.0.54:8443: read: connection reset by peer" start-of-body= Jan 27 06:49:38 crc kubenswrapper[4796]: I0127 06:49:38.724429 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-748b7b7cd9-rr79d" podUID="ede723b6-b6d8-4b81-aa39-47241110f7a5" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": read tcp 10.217.0.2:59146->10.217.0.54:8443: read: connection reset by peer" Jan 27 06:49:38 crc kubenswrapper[4796]: E0127 06:49:38.783084 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 27 06:49:38 crc kubenswrapper[4796]: E0127 06:49:38.784165 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7trsn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-8lc8b_openshift-marketplace(47a26ac3-524b-47c5-abb8-d2c4837659e7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 06:49:38 crc kubenswrapper[4796]: E0127 06:49:38.785371 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-8lc8b" podUID="47a26ac3-524b-47c5-abb8-d2c4837659e7" Jan 27 06:49:38 crc kubenswrapper[4796]: E0127 06:49:38.866570 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 27 06:49:38 crc kubenswrapper[4796]: E0127 06:49:38.866817 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-92w27,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-m75r4_openshift-marketplace(8b7db9a0-58a5-496b-826c-6e64920151b8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 27 06:49:38 crc kubenswrapper[4796]: E0127 06:49:38.869010 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-m75r4" podUID="8b7db9a0-58a5-496b-826c-6e64920151b8" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.012448 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-748b7b7cd9-rr79d_ede723b6-b6d8-4b81-aa39-47241110f7a5/route-controller-manager/0.log" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.012605 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-748b7b7cd9-rr79d" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.045421 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79c58b57df-5s6q9"] Jan 27 06:49:39 crc kubenswrapper[4796]: E0127 06:49:39.045727 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ede723b6-b6d8-4b81-aa39-47241110f7a5" containerName="route-controller-manager" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.045743 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ede723b6-b6d8-4b81-aa39-47241110f7a5" containerName="route-controller-manager" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.045869 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="ede723b6-b6d8-4b81-aa39-47241110f7a5" containerName="route-controller-manager" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.046324 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79c58b57df-5s6q9" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.053101 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qd85q\" (UniqueName: \"kubernetes.io/projected/ede723b6-b6d8-4b81-aa39-47241110f7a5-kube-api-access-qd85q\") pod \"ede723b6-b6d8-4b81-aa39-47241110f7a5\" (UID: \"ede723b6-b6d8-4b81-aa39-47241110f7a5\") " Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.053206 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ede723b6-b6d8-4b81-aa39-47241110f7a5-serving-cert\") pod \"ede723b6-b6d8-4b81-aa39-47241110f7a5\" (UID: \"ede723b6-b6d8-4b81-aa39-47241110f7a5\") " Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.053250 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ede723b6-b6d8-4b81-aa39-47241110f7a5-config\") pod \"ede723b6-b6d8-4b81-aa39-47241110f7a5\" (UID: \"ede723b6-b6d8-4b81-aa39-47241110f7a5\") " Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.053282 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ede723b6-b6d8-4b81-aa39-47241110f7a5-client-ca\") pod \"ede723b6-b6d8-4b81-aa39-47241110f7a5\" (UID: \"ede723b6-b6d8-4b81-aa39-47241110f7a5\") " Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.054603 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79c58b57df-5s6q9"] Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.054671 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ede723b6-b6d8-4b81-aa39-47241110f7a5-client-ca" (OuterVolumeSpecName: "client-ca") pod "ede723b6-b6d8-4b81-aa39-47241110f7a5" (UID: "ede723b6-b6d8-4b81-aa39-47241110f7a5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.056115 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ede723b6-b6d8-4b81-aa39-47241110f7a5-config" (OuterVolumeSpecName: "config") pod "ede723b6-b6d8-4b81-aa39-47241110f7a5" (UID: "ede723b6-b6d8-4b81-aa39-47241110f7a5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.060391 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ede723b6-b6d8-4b81-aa39-47241110f7a5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ede723b6-b6d8-4b81-aa39-47241110f7a5" (UID: "ede723b6-b6d8-4b81-aa39-47241110f7a5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.061198 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ede723b6-b6d8-4b81-aa39-47241110f7a5-kube-api-access-qd85q" (OuterVolumeSpecName: "kube-api-access-qd85q") pod "ede723b6-b6d8-4b81-aa39-47241110f7a5" (UID: "ede723b6-b6d8-4b81-aa39-47241110f7a5"). InnerVolumeSpecName "kube-api-access-qd85q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.155310 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/052d19d2-cd0c-4ddb-bc61-b5c40e0fba95-client-ca\") pod \"route-controller-manager-79c58b57df-5s6q9\" (UID: \"052d19d2-cd0c-4ddb-bc61-b5c40e0fba95\") " pod="openshift-route-controller-manager/route-controller-manager-79c58b57df-5s6q9" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.155473 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/052d19d2-cd0c-4ddb-bc61-b5c40e0fba95-config\") pod \"route-controller-manager-79c58b57df-5s6q9\" (UID: \"052d19d2-cd0c-4ddb-bc61-b5c40e0fba95\") " pod="openshift-route-controller-manager/route-controller-manager-79c58b57df-5s6q9" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.155505 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/052d19d2-cd0c-4ddb-bc61-b5c40e0fba95-serving-cert\") pod \"route-controller-manager-79c58b57df-5s6q9\" (UID: \"052d19d2-cd0c-4ddb-bc61-b5c40e0fba95\") " pod="openshift-route-controller-manager/route-controller-manager-79c58b57df-5s6q9" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.155595 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc7lw\" (UniqueName: \"kubernetes.io/projected/052d19d2-cd0c-4ddb-bc61-b5c40e0fba95-kube-api-access-lc7lw\") pod \"route-controller-manager-79c58b57df-5s6q9\" (UID: \"052d19d2-cd0c-4ddb-bc61-b5c40e0fba95\") " pod="openshift-route-controller-manager/route-controller-manager-79c58b57df-5s6q9" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.155642 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ede723b6-b6d8-4b81-aa39-47241110f7a5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.155656 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ede723b6-b6d8-4b81-aa39-47241110f7a5-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.155685 4796 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ede723b6-b6d8-4b81-aa39-47241110f7a5-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.155707 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qd85q\" (UniqueName: \"kubernetes.io/projected/ede723b6-b6d8-4b81-aa39-47241110f7a5-kube-api-access-qd85q\") on node \"crc\" DevicePath \"\"" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.210785 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.211467 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.228561 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.258137 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc7lw\" (UniqueName: \"kubernetes.io/projected/052d19d2-cd0c-4ddb-bc61-b5c40e0fba95-kube-api-access-lc7lw\") pod \"route-controller-manager-79c58b57df-5s6q9\" (UID: \"052d19d2-cd0c-4ddb-bc61-b5c40e0fba95\") " pod="openshift-route-controller-manager/route-controller-manager-79c58b57df-5s6q9" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.258181 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/052d19d2-cd0c-4ddb-bc61-b5c40e0fba95-client-ca\") pod \"route-controller-manager-79c58b57df-5s6q9\" (UID: \"052d19d2-cd0c-4ddb-bc61-b5c40e0fba95\") " pod="openshift-route-controller-manager/route-controller-manager-79c58b57df-5s6q9" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.258227 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c6856fd8-b28c-489f-a672-e05777061280-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c6856fd8-b28c-489f-a672-e05777061280\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.258274 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c6856fd8-b28c-489f-a672-e05777061280-var-lock\") pod \"installer-9-crc\" (UID: \"c6856fd8-b28c-489f-a672-e05777061280\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.258304 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c6856fd8-b28c-489f-a672-e05777061280-kube-api-access\") pod \"installer-9-crc\" (UID: \"c6856fd8-b28c-489f-a672-e05777061280\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.258335 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/052d19d2-cd0c-4ddb-bc61-b5c40e0fba95-config\") pod \"route-controller-manager-79c58b57df-5s6q9\" (UID: \"052d19d2-cd0c-4ddb-bc61-b5c40e0fba95\") " pod="openshift-route-controller-manager/route-controller-manager-79c58b57df-5s6q9" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.258357 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/052d19d2-cd0c-4ddb-bc61-b5c40e0fba95-serving-cert\") pod \"route-controller-manager-79c58b57df-5s6q9\" (UID: \"052d19d2-cd0c-4ddb-bc61-b5c40e0fba95\") " pod="openshift-route-controller-manager/route-controller-manager-79c58b57df-5s6q9" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.260296 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/052d19d2-cd0c-4ddb-bc61-b5c40e0fba95-config\") pod \"route-controller-manager-79c58b57df-5s6q9\" (UID: \"052d19d2-cd0c-4ddb-bc61-b5c40e0fba95\") " 
pod="openshift-route-controller-manager/route-controller-manager-79c58b57df-5s6q9" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.260633 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/052d19d2-cd0c-4ddb-bc61-b5c40e0fba95-client-ca\") pod \"route-controller-manager-79c58b57df-5s6q9\" (UID: \"052d19d2-cd0c-4ddb-bc61-b5c40e0fba95\") " pod="openshift-route-controller-manager/route-controller-manager-79c58b57df-5s6q9" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.271209 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/052d19d2-cd0c-4ddb-bc61-b5c40e0fba95-serving-cert\") pod \"route-controller-manager-79c58b57df-5s6q9\" (UID: \"052d19d2-cd0c-4ddb-bc61-b5c40e0fba95\") " pod="openshift-route-controller-manager/route-controller-manager-79c58b57df-5s6q9" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.275791 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc7lw\" (UniqueName: \"kubernetes.io/projected/052d19d2-cd0c-4ddb-bc61-b5c40e0fba95-kube-api-access-lc7lw\") pod \"route-controller-manager-79c58b57df-5s6q9\" (UID: \"052d19d2-cd0c-4ddb-bc61-b5c40e0fba95\") " pod="openshift-route-controller-manager/route-controller-manager-79c58b57df-5s6q9" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.360109 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c6856fd8-b28c-489f-a672-e05777061280-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c6856fd8-b28c-489f-a672-e05777061280\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.360197 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c6856fd8-b28c-489f-a672-e05777061280-var-lock\") pod \"installer-9-crc\" (UID: \"c6856fd8-b28c-489f-a672-e05777061280\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.360211 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c6856fd8-b28c-489f-a672-e05777061280-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c6856fd8-b28c-489f-a672-e05777061280\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.360424 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c6856fd8-b28c-489f-a672-e05777061280-kube-api-access\") pod \"installer-9-crc\" (UID: \"c6856fd8-b28c-489f-a672-e05777061280\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.360456 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c6856fd8-b28c-489f-a672-e05777061280-var-lock\") pod \"installer-9-crc\" (UID: \"c6856fd8-b28c-489f-a672-e05777061280\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.376334 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79c58b57df-5s6q9" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.381084 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c6856fd8-b28c-489f-a672-e05777061280-kube-api-access\") pod \"installer-9-crc\" (UID: \"c6856fd8-b28c-489f-a672-e05777061280\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.472018 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-9gdn2" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.524592 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.550360 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-748b7b7cd9-rr79d_ede723b6-b6d8-4b81-aa39-47241110f7a5/route-controller-manager/0.log" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.550405 4796 generic.go:334] "Generic (PLEG): container finished" podID="ede723b6-b6d8-4b81-aa39-47241110f7a5" containerID="40adb49aa43c6cf3cda1421e24e8dba7ee8757b6ca77883c91247800818e5256" exitCode=255 Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.550474 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-748b7b7cd9-rr79d" event={"ID":"ede723b6-b6d8-4b81-aa39-47241110f7a5","Type":"ContainerDied","Data":"40adb49aa43c6cf3cda1421e24e8dba7ee8757b6ca77883c91247800818e5256"} Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.550502 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-748b7b7cd9-rr79d" event={"ID":"ede723b6-b6d8-4b81-aa39-47241110f7a5","Type":"ContainerDied","Data":"2ad2120310472dcdcd25a0943bb1b69be6583b063e217154a031d1fe7ee2b4bc"} Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.550497 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-748b7b7cd9-rr79d" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.550517 4796 scope.go:117] "RemoveContainer" containerID="40adb49aa43c6cf3cda1421e24e8dba7ee8757b6ca77883c91247800818e5256" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.556330 4796 generic.go:334] "Generic (PLEG): container finished" podID="53b2fdaf-6df8-46ab-bdeb-55b932b32d54" containerID="7151fdfe69496e7e83d8359771174de05a333ad19d14decc90f059ecbe7edd5a" exitCode=0 Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.556382 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-9gdn2" event={"ID":"53b2fdaf-6df8-46ab-bdeb-55b932b32d54","Type":"ContainerDied","Data":"7151fdfe69496e7e83d8359771174de05a333ad19d14decc90f059ecbe7edd5a"} Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.556392 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-9gdn2" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.556400 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-9gdn2" event={"ID":"53b2fdaf-6df8-46ab-bdeb-55b932b32d54","Type":"ContainerDied","Data":"10039273fcc68d3db0a37b0b8caa430209ed6f85301c36adc688e4879bc0866d"} Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.558759 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gvx56" event={"ID":"bb6e3d41-bfb7-4d81-a8c9-e6ad82af9a09","Type":"ContainerStarted","Data":"703f845806b2f9fe9d292639f4c7aa6a6c7f28750f785824d70257babdcb88a3"} Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.560602 4796 generic.go:334] "Generic (PLEG): container finished" podID="3cb2188d-d559-4429-9997-5e7729ecc05b" containerID="5783b7d490e2b49d23c718bcb4a31c129213f0385f44a21c6b0b8dd04eff54d5" exitCode=0 Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.560669 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3cb2188d-d559-4429-9997-5e7729ecc05b","Type":"ContainerDied","Data":"5783b7d490e2b49d23c718bcb4a31c129213f0385f44a21c6b0b8dd04eff54d5"} Jan 27 06:49:39 crc kubenswrapper[4796]: E0127 06:49:39.563804 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-8lc8b" podUID="47a26ac3-524b-47c5-abb8-d2c4837659e7" Jan 27 06:49:39 crc kubenswrapper[4796]: E0127 06:49:39.563896 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-m75r4" podUID="8b7db9a0-58a5-496b-826c-6e64920151b8" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.568317 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53b2fdaf-6df8-46ab-bdeb-55b932b32d54-config\") pod \"53b2fdaf-6df8-46ab-bdeb-55b932b32d54\" (UID: \"53b2fdaf-6df8-46ab-bdeb-55b932b32d54\") " Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.568386 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/53b2fdaf-6df8-46ab-bdeb-55b932b32d54-proxy-ca-bundles\") pod \"53b2fdaf-6df8-46ab-bdeb-55b932b32d54\" (UID: \"53b2fdaf-6df8-46ab-bdeb-55b932b32d54\") " Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.568450 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53b2fdaf-6df8-46ab-bdeb-55b932b32d54-serving-cert\") pod \"53b2fdaf-6df8-46ab-bdeb-55b932b32d54\" (UID: \"53b2fdaf-6df8-46ab-bdeb-55b932b32d54\") " Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.568496 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53b2fdaf-6df8-46ab-bdeb-55b932b32d54-client-ca\") pod \"53b2fdaf-6df8-46ab-bdeb-55b932b32d54\" (UID: \"53b2fdaf-6df8-46ab-bdeb-55b932b32d54\") " Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 
06:49:39.568582 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfdqc\" (UniqueName: \"kubernetes.io/projected/53b2fdaf-6df8-46ab-bdeb-55b932b32d54-kube-api-access-tfdqc\") pod \"53b2fdaf-6df8-46ab-bdeb-55b932b32d54\" (UID: \"53b2fdaf-6df8-46ab-bdeb-55b932b32d54\") " Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.570103 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53b2fdaf-6df8-46ab-bdeb-55b932b32d54-config" (OuterVolumeSpecName: "config") pod "53b2fdaf-6df8-46ab-bdeb-55b932b32d54" (UID: "53b2fdaf-6df8-46ab-bdeb-55b932b32d54"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.574795 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53b2fdaf-6df8-46ab-bdeb-55b932b32d54-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "53b2fdaf-6df8-46ab-bdeb-55b932b32d54" (UID: "53b2fdaf-6df8-46ab-bdeb-55b932b32d54"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.574924 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53b2fdaf-6df8-46ab-bdeb-55b932b32d54-kube-api-access-tfdqc" (OuterVolumeSpecName: "kube-api-access-tfdqc") pod "53b2fdaf-6df8-46ab-bdeb-55b932b32d54" (UID: "53b2fdaf-6df8-46ab-bdeb-55b932b32d54"). InnerVolumeSpecName "kube-api-access-tfdqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.578817 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-gvx56" podStartSLOduration=172.578794379 podStartE2EDuration="2m52.578794379s" podCreationTimestamp="2026-01-27 06:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:49:39.575287198 +0000 UTC m=+200.682254525" watchObservedRunningTime="2026-01-27 06:49:39.578794379 +0000 UTC m=+200.685761716" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.580610 4796 scope.go:117] "RemoveContainer" containerID="40adb49aa43c6cf3cda1421e24e8dba7ee8757b6ca77883c91247800818e5256" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.581194 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53b2fdaf-6df8-46ab-bdeb-55b932b32d54-client-ca" (OuterVolumeSpecName: "client-ca") pod "53b2fdaf-6df8-46ab-bdeb-55b932b32d54" (UID: "53b2fdaf-6df8-46ab-bdeb-55b932b32d54"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.581868 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53b2fdaf-6df8-46ab-bdeb-55b932b32d54-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "53b2fdaf-6df8-46ab-bdeb-55b932b32d54" (UID: "53b2fdaf-6df8-46ab-bdeb-55b932b32d54"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:49:39 crc kubenswrapper[4796]: E0127 06:49:39.582811 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40adb49aa43c6cf3cda1421e24e8dba7ee8757b6ca77883c91247800818e5256\": container with ID starting with 40adb49aa43c6cf3cda1421e24e8dba7ee8757b6ca77883c91247800818e5256 not found: ID does not exist" containerID="40adb49aa43c6cf3cda1421e24e8dba7ee8757b6ca77883c91247800818e5256" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.582851 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40adb49aa43c6cf3cda1421e24e8dba7ee8757b6ca77883c91247800818e5256"} err="failed to get container status \"40adb49aa43c6cf3cda1421e24e8dba7ee8757b6ca77883c91247800818e5256\": rpc error: code = NotFound desc = could not find container \"40adb49aa43c6cf3cda1421e24e8dba7ee8757b6ca77883c91247800818e5256\": container with ID starting with 40adb49aa43c6cf3cda1421e24e8dba7ee8757b6ca77883c91247800818e5256 not found: ID does not exist" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.582931 4796 scope.go:117] "RemoveContainer" containerID="7151fdfe69496e7e83d8359771174de05a333ad19d14decc90f059ecbe7edd5a" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.590146 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-748b7b7cd9-rr79d"] Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.593125 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-748b7b7cd9-rr79d"] Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.624993 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79c58b57df-5s6q9"] Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.645343 4796 scope.go:117] "RemoveContainer" containerID="7151fdfe69496e7e83d8359771174de05a333ad19d14decc90f059ecbe7edd5a" Jan 27 06:49:39 crc kubenswrapper[4796]: E0127 06:49:39.646581 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7151fdfe69496e7e83d8359771174de05a333ad19d14decc90f059ecbe7edd5a\": container with ID starting with 7151fdfe69496e7e83d8359771174de05a333ad19d14decc90f059ecbe7edd5a not found: ID does not exist" containerID="7151fdfe69496e7e83d8359771174de05a333ad19d14decc90f059ecbe7edd5a" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.646641 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7151fdfe69496e7e83d8359771174de05a333ad19d14decc90f059ecbe7edd5a"} err="failed to get container status \"7151fdfe69496e7e83d8359771174de05a333ad19d14decc90f059ecbe7edd5a\": rpc error: code = NotFound desc = could not find container \"7151fdfe69496e7e83d8359771174de05a333ad19d14decc90f059ecbe7edd5a\": container with ID starting with 7151fdfe69496e7e83d8359771174de05a333ad19d14decc90f059ecbe7edd5a not found: ID does not exist" Jan 27 06:49:39 crc kubenswrapper[4796]: W0127 06:49:39.653331 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod052d19d2_cd0c_4ddb_bc61_b5c40e0fba95.slice/crio-e97b97ed32bf998da5a29d3a8741adc765a94ee0a4961b95faf4ab6359e10fe9 WatchSource:0}: Error finding container e97b97ed32bf998da5a29d3a8741adc765a94ee0a4961b95faf4ab6359e10fe9: 
Status 404 returned error can't find the container with id e97b97ed32bf998da5a29d3a8741adc765a94ee0a4961b95faf4ab6359e10fe9 Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.670045 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53b2fdaf-6df8-46ab-bdeb-55b932b32d54-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.670079 4796 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/53b2fdaf-6df8-46ab-bdeb-55b932b32d54-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.670095 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53b2fdaf-6df8-46ab-bdeb-55b932b32d54-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.670107 4796 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53b2fdaf-6df8-46ab-bdeb-55b932b32d54-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.670118 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfdqc\" (UniqueName: \"kubernetes.io/projected/53b2fdaf-6df8-46ab-bdeb-55b932b32d54-kube-api-access-tfdqc\") on node \"crc\" DevicePath \"\"" Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.786174 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.889607 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d4dfb9c5f-9gdn2"] Jan 27 06:49:39 crc kubenswrapper[4796]: I0127 06:49:39.892514 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5d4dfb9c5f-9gdn2"] Jan 27 06:49:40 crc kubenswrapper[4796]: I0127 06:49:40.570613 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79c58b57df-5s6q9" event={"ID":"052d19d2-cd0c-4ddb-bc61-b5c40e0fba95","Type":"ContainerStarted","Data":"9f3051e0f3fb5461316bde69b05ffdf88b9fd15b9b5e09363fe60c9facd13162"} Jan 27 06:49:40 crc kubenswrapper[4796]: I0127 06:49:40.570747 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-79c58b57df-5s6q9" Jan 27 06:49:40 crc kubenswrapper[4796]: I0127 06:49:40.570797 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79c58b57df-5s6q9" event={"ID":"052d19d2-cd0c-4ddb-bc61-b5c40e0fba95","Type":"ContainerStarted","Data":"e97b97ed32bf998da5a29d3a8741adc765a94ee0a4961b95faf4ab6359e10fe9"} Jan 27 06:49:40 crc kubenswrapper[4796]: I0127 06:49:40.578065 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c6856fd8-b28c-489f-a672-e05777061280","Type":"ContainerStarted","Data":"708c66deac3d4231f36f894cb4c8673bc8111e4cd56f285324f7273e428672c7"} Jan 27 06:49:40 crc kubenswrapper[4796]: I0127 06:49:40.578140 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c6856fd8-b28c-489f-a672-e05777061280","Type":"ContainerStarted","Data":"85c359559b64b24d78a3bcfb96a84340b55079c537d4c9f3306687668b3b9c0e"} Jan 
27 06:49:40 crc kubenswrapper[4796]: I0127 06:49:40.578395 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-79c58b57df-5s6q9" Jan 27 06:49:40 crc kubenswrapper[4796]: I0127 06:49:40.603601 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-79c58b57df-5s6q9" podStartSLOduration=9.603578893 podStartE2EDuration="9.603578893s" podCreationTimestamp="2026-01-27 06:49:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:49:40.587218166 +0000 UTC m=+201.694185503" watchObservedRunningTime="2026-01-27 06:49:40.603578893 +0000 UTC m=+201.710546220" Jan 27 06:49:40 crc kubenswrapper[4796]: I0127 06:49:40.624393 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.624366406 podStartE2EDuration="1.624366406s" podCreationTimestamp="2026-01-27 06:49:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:49:40.616062369 +0000 UTC m=+201.723029706" watchObservedRunningTime="2026-01-27 06:49:40.624366406 +0000 UTC m=+201.731333743" Jan 27 06:49:40 crc kubenswrapper[4796]: I0127 06:49:40.756805 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53b2fdaf-6df8-46ab-bdeb-55b932b32d54" path="/var/lib/kubelet/pods/53b2fdaf-6df8-46ab-bdeb-55b932b32d54/volumes" Jan 27 06:49:40 crc kubenswrapper[4796]: I0127 06:49:40.757656 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ede723b6-b6d8-4b81-aa39-47241110f7a5" path="/var/lib/kubelet/pods/ede723b6-b6d8-4b81-aa39-47241110f7a5/volumes" Jan 27 06:49:40 crc kubenswrapper[4796]: I0127 06:49:40.809353 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 06:49:40 crc kubenswrapper[4796]: I0127 06:49:40.884266 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3cb2188d-d559-4429-9997-5e7729ecc05b-kube-api-access\") pod \"3cb2188d-d559-4429-9997-5e7729ecc05b\" (UID: \"3cb2188d-d559-4429-9997-5e7729ecc05b\") " Jan 27 06:49:40 crc kubenswrapper[4796]: I0127 06:49:40.884378 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3cb2188d-d559-4429-9997-5e7729ecc05b-kubelet-dir\") pod \"3cb2188d-d559-4429-9997-5e7729ecc05b\" (UID: \"3cb2188d-d559-4429-9997-5e7729ecc05b\") " Jan 27 06:49:40 crc kubenswrapper[4796]: I0127 06:49:40.884654 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3cb2188d-d559-4429-9997-5e7729ecc05b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3cb2188d-d559-4429-9997-5e7729ecc05b" (UID: "3cb2188d-d559-4429-9997-5e7729ecc05b"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:49:40 crc kubenswrapper[4796]: I0127 06:49:40.891729 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb2188d-d559-4429-9997-5e7729ecc05b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3cb2188d-d559-4429-9997-5e7729ecc05b" (UID: "3cb2188d-d559-4429-9997-5e7729ecc05b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:49:40 crc kubenswrapper[4796]: I0127 06:49:40.985872 4796 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3cb2188d-d559-4429-9997-5e7729ecc05b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 06:49:40 crc kubenswrapper[4796]: I0127 06:49:40.986242 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3cb2188d-d559-4429-9997-5e7729ecc05b-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 06:49:41 crc kubenswrapper[4796]: I0127 06:49:41.586983 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3cb2188d-d559-4429-9997-5e7729ecc05b","Type":"ContainerDied","Data":"93bdd8f081fc1f1a9b1d30086e88fc50c2454f20d5ee05639133af5c01259b4a"} Jan 27 06:49:41 crc kubenswrapper[4796]: I0127 06:49:41.587049 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93bdd8f081fc1f1a9b1d30086e88fc50c2454f20d5ee05639133af5c01259b4a" Jan 27 06:49:41 crc kubenswrapper[4796]: I0127 06:49:41.587928 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 27 06:49:41 crc kubenswrapper[4796]: I0127 06:49:41.985710 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-f5ccbdccf-jmflt"] Jan 27 06:49:41 crc kubenswrapper[4796]: E0127 06:49:41.986097 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53b2fdaf-6df8-46ab-bdeb-55b932b32d54" containerName="controller-manager" Jan 27 06:49:41 crc kubenswrapper[4796]: I0127 06:49:41.986120 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="53b2fdaf-6df8-46ab-bdeb-55b932b32d54" containerName="controller-manager" Jan 27 06:49:41 crc kubenswrapper[4796]: E0127 06:49:41.986131 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cb2188d-d559-4429-9997-5e7729ecc05b" containerName="pruner" Jan 27 06:49:41 crc kubenswrapper[4796]: I0127 06:49:41.986139 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cb2188d-d559-4429-9997-5e7729ecc05b" containerName="pruner" Jan 27 06:49:41 crc kubenswrapper[4796]: I0127 06:49:41.986276 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="53b2fdaf-6df8-46ab-bdeb-55b932b32d54" containerName="controller-manager" Jan 27 06:49:41 crc kubenswrapper[4796]: I0127 06:49:41.986297 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cb2188d-d559-4429-9997-5e7729ecc05b" containerName="pruner" Jan 27 06:49:41 crc kubenswrapper[4796]: I0127 06:49:41.986795 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f5ccbdccf-jmflt" Jan 27 06:49:41 crc kubenswrapper[4796]: I0127 06:49:41.988517 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 06:49:41 crc kubenswrapper[4796]: I0127 06:49:41.989119 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 06:49:41 crc kubenswrapper[4796]: I0127 06:49:41.989484 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 06:49:41 crc kubenswrapper[4796]: I0127 06:49:41.989683 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 06:49:41 crc kubenswrapper[4796]: I0127 06:49:41.989732 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 06:49:41 crc kubenswrapper[4796]: I0127 06:49:41.989971 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 06:49:41 crc kubenswrapper[4796]: I0127 06:49:41.995668 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f5ccbdccf-jmflt"] Jan 27 06:49:42 crc kubenswrapper[4796]: I0127 06:49:42.001247 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 06:49:42 crc kubenswrapper[4796]: I0127 06:49:42.099313 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60-serving-cert\") pod \"controller-manager-f5ccbdccf-jmflt\" (UID: \"47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60\") " pod="openshift-controller-manager/controller-manager-f5ccbdccf-jmflt" Jan 27 06:49:42 crc kubenswrapper[4796]: I0127 06:49:42.099359 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60-config\") pod \"controller-manager-f5ccbdccf-jmflt\" (UID: \"47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60\") " pod="openshift-controller-manager/controller-manager-f5ccbdccf-jmflt" Jan 27 06:49:42 crc kubenswrapper[4796]: I0127 06:49:42.099390 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60-proxy-ca-bundles\") pod \"controller-manager-f5ccbdccf-jmflt\" (UID: \"47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60\") " pod="openshift-controller-manager/controller-manager-f5ccbdccf-jmflt" Jan 27 06:49:42 crc kubenswrapper[4796]: I0127 06:49:42.099416 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60-client-ca\") pod \"controller-manager-f5ccbdccf-jmflt\" (UID: \"47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60\") " pod="openshift-controller-manager/controller-manager-f5ccbdccf-jmflt" Jan 27 06:49:42 crc kubenswrapper[4796]: I0127 06:49:42.099443 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgbhh\" (UniqueName: 
\"kubernetes.io/projected/47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60-kube-api-access-xgbhh\") pod \"controller-manager-f5ccbdccf-jmflt\" (UID: \"47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60\") " pod="openshift-controller-manager/controller-manager-f5ccbdccf-jmflt" Jan 27 06:49:42 crc kubenswrapper[4796]: I0127 06:49:42.200472 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgbhh\" (UniqueName: \"kubernetes.io/projected/47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60-kube-api-access-xgbhh\") pod \"controller-manager-f5ccbdccf-jmflt\" (UID: \"47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60\") " pod="openshift-controller-manager/controller-manager-f5ccbdccf-jmflt" Jan 27 06:49:42 crc kubenswrapper[4796]: I0127 06:49:42.200942 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60-serving-cert\") pod \"controller-manager-f5ccbdccf-jmflt\" (UID: \"47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60\") " pod="openshift-controller-manager/controller-manager-f5ccbdccf-jmflt" Jan 27 06:49:42 crc kubenswrapper[4796]: I0127 06:49:42.200975 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60-config\") pod \"controller-manager-f5ccbdccf-jmflt\" (UID: \"47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60\") " pod="openshift-controller-manager/controller-manager-f5ccbdccf-jmflt" Jan 27 06:49:42 crc kubenswrapper[4796]: I0127 06:49:42.201014 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60-proxy-ca-bundles\") pod \"controller-manager-f5ccbdccf-jmflt\" (UID: \"47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60\") " pod="openshift-controller-manager/controller-manager-f5ccbdccf-jmflt" Jan 27 06:49:42 crc kubenswrapper[4796]: I0127 06:49:42.201044 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60-client-ca\") pod \"controller-manager-f5ccbdccf-jmflt\" (UID: \"47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60\") " pod="openshift-controller-manager/controller-manager-f5ccbdccf-jmflt" Jan 27 06:49:42 crc kubenswrapper[4796]: I0127 06:49:42.202746 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60-proxy-ca-bundles\") pod \"controller-manager-f5ccbdccf-jmflt\" (UID: \"47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60\") " pod="openshift-controller-manager/controller-manager-f5ccbdccf-jmflt" Jan 27 06:49:42 crc kubenswrapper[4796]: I0127 06:49:42.202949 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60-config\") pod \"controller-manager-f5ccbdccf-jmflt\" (UID: \"47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60\") " pod="openshift-controller-manager/controller-manager-f5ccbdccf-jmflt" Jan 27 06:49:42 crc kubenswrapper[4796]: I0127 06:49:42.203878 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60-client-ca\") pod \"controller-manager-f5ccbdccf-jmflt\" (UID: \"47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60\") " pod="openshift-controller-manager/controller-manager-f5ccbdccf-jmflt" Jan 
27 06:49:42 crc kubenswrapper[4796]: I0127 06:49:42.208042 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60-serving-cert\") pod \"controller-manager-f5ccbdccf-jmflt\" (UID: \"47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60\") " pod="openshift-controller-manager/controller-manager-f5ccbdccf-jmflt" Jan 27 06:49:42 crc kubenswrapper[4796]: I0127 06:49:42.219037 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgbhh\" (UniqueName: \"kubernetes.io/projected/47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60-kube-api-access-xgbhh\") pod \"controller-manager-f5ccbdccf-jmflt\" (UID: \"47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60\") " pod="openshift-controller-manager/controller-manager-f5ccbdccf-jmflt" Jan 27 06:49:42 crc kubenswrapper[4796]: I0127 06:49:42.321255 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f5ccbdccf-jmflt" Jan 27 06:49:42 crc kubenswrapper[4796]: I0127 06:49:42.519670 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f5ccbdccf-jmflt"] Jan 27 06:49:42 crc kubenswrapper[4796]: I0127 06:49:42.608645 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9rz8s" event={"ID":"3a16a514-cf37-4eb5-a775-6dc2573704cf","Type":"ContainerStarted","Data":"354118ff1fe09861b74c8171d2b6350adb21c27ee48a5971fdcce2ccee3a996d"} Jan 27 06:49:42 crc kubenswrapper[4796]: I0127 06:49:42.612168 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f5ccbdccf-jmflt" event={"ID":"47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60","Type":"ContainerStarted","Data":"95d3f0be3f81568ddc2dff51115f91b35c7ce374e3e5cbe5e0c81fd281d34c91"} Jan 27 06:49:43 crc kubenswrapper[4796]: I0127 06:49:43.622258 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f5ccbdccf-jmflt" event={"ID":"47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60","Type":"ContainerStarted","Data":"0d1c5f795317c1a7df695e3f5d16ded69c5c5bd68ebc1f4d72f3452496dffd61"} Jan 27 06:49:43 crc kubenswrapper[4796]: I0127 06:49:43.624300 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-f5ccbdccf-jmflt" Jan 27 06:49:43 crc kubenswrapper[4796]: I0127 06:49:43.628491 4796 generic.go:334] "Generic (PLEG): container finished" podID="3a16a514-cf37-4eb5-a775-6dc2573704cf" containerID="354118ff1fe09861b74c8171d2b6350adb21c27ee48a5971fdcce2ccee3a996d" exitCode=0 Jan 27 06:49:43 crc kubenswrapper[4796]: I0127 06:49:43.628357 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-f5ccbdccf-jmflt" Jan 27 06:49:43 crc kubenswrapper[4796]: I0127 06:49:43.628738 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9rz8s" event={"ID":"3a16a514-cf37-4eb5-a775-6dc2573704cf","Type":"ContainerDied","Data":"354118ff1fe09861b74c8171d2b6350adb21c27ee48a5971fdcce2ccee3a996d"} Jan 27 06:49:43 crc kubenswrapper[4796]: I0127 06:49:43.672449 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-f5ccbdccf-jmflt" podStartSLOduration=12.672421563 podStartE2EDuration="12.672421563s" podCreationTimestamp="2026-01-27 06:49:31 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:49:43.649257988 +0000 UTC m=+204.756225405" watchObservedRunningTime="2026-01-27 06:49:43.672421563 +0000 UTC m=+204.779388890" Jan 27 06:49:44 crc kubenswrapper[4796]: I0127 06:49:44.640874 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9rz8s" event={"ID":"3a16a514-cf37-4eb5-a775-6dc2573704cf","Type":"ContainerStarted","Data":"1a2d092a21e6a34b19a88f51f9525c9d441e6c677aa00b95280916d3cf2260fd"} Jan 27 06:49:44 crc kubenswrapper[4796]: I0127 06:49:44.671395 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9rz8s" podStartSLOduration=4.447902172 podStartE2EDuration="52.671375183s" podCreationTimestamp="2026-01-27 06:48:52 +0000 UTC" firstStartedPulling="2026-01-27 06:48:55.815770761 +0000 UTC m=+156.922738088" lastFinishedPulling="2026-01-27 06:49:44.039243772 +0000 UTC m=+205.146211099" observedRunningTime="2026-01-27 06:49:44.668426146 +0000 UTC m=+205.775393483" watchObservedRunningTime="2026-01-27 06:49:44.671375183 +0000 UTC m=+205.778342510" Jan 27 06:49:46 crc kubenswrapper[4796]: I0127 06:49:46.657174 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mn94j" event={"ID":"ae643542-ca5e-4cee-aaba-818f3d424763","Type":"ContainerStarted","Data":"93e30ccba62222b7c0be36caeb46537aae4409941ddfafbb8c2842795a723677"} Jan 27 06:49:47 crc kubenswrapper[4796]: I0127 06:49:47.666711 4796 generic.go:334] "Generic (PLEG): container finished" podID="ae643542-ca5e-4cee-aaba-818f3d424763" containerID="93e30ccba62222b7c0be36caeb46537aae4409941ddfafbb8c2842795a723677" exitCode=0 Jan 27 06:49:47 crc kubenswrapper[4796]: I0127 06:49:47.666785 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mn94j" event={"ID":"ae643542-ca5e-4cee-aaba-818f3d424763","Type":"ContainerDied","Data":"93e30ccba62222b7c0be36caeb46537aae4409941ddfafbb8c2842795a723677"} Jan 27 06:49:48 crc kubenswrapper[4796]: I0127 06:49:48.673877 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mn94j" event={"ID":"ae643542-ca5e-4cee-aaba-818f3d424763","Type":"ContainerStarted","Data":"b69e0f63a0b8ef774051df55b3c95eb1838265c411b21561b246fea52b63892a"} Jan 27 06:49:48 crc kubenswrapper[4796]: I0127 06:49:48.696031 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mn94j" podStartSLOduration=3.613787676 podStartE2EDuration="53.696011677s" podCreationTimestamp="2026-01-27 06:48:55 +0000 UTC" firstStartedPulling="2026-01-27 06:48:58.122609622 +0000 UTC m=+159.229576949" lastFinishedPulling="2026-01-27 06:49:48.204833613 +0000 UTC m=+209.311800950" observedRunningTime="2026-01-27 06:49:48.694735464 +0000 UTC m=+209.801702791" watchObservedRunningTime="2026-01-27 06:49:48.696011677 +0000 UTC m=+209.802979004" Jan 27 06:49:49 crc kubenswrapper[4796]: I0127 06:49:49.684528 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8r59" event={"ID":"25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae","Type":"ContainerStarted","Data":"fc92cc306f7f9c5e1c65787d5dfdbeec683cde2a5f06c24dfd3927719587931d"} Jan 27 06:49:49 crc kubenswrapper[4796]: I0127 06:49:49.688711 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-2pn7g" event={"ID":"5397ad23-1136-4577-b821-4199914b8582","Type":"ContainerStarted","Data":"d553bdb47e3bbb9da2ec6f637adabd2e9da600d8a650c1b6adb854304c95c715"} Jan 27 06:49:50 crc kubenswrapper[4796]: I0127 06:49:50.716907 4796 generic.go:334] "Generic (PLEG): container finished" podID="25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae" containerID="fc92cc306f7f9c5e1c65787d5dfdbeec683cde2a5f06c24dfd3927719587931d" exitCode=0 Jan 27 06:49:50 crc kubenswrapper[4796]: I0127 06:49:50.717184 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8r59" event={"ID":"25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae","Type":"ContainerDied","Data":"fc92cc306f7f9c5e1c65787d5dfdbeec683cde2a5f06c24dfd3927719587931d"} Jan 27 06:49:50 crc kubenswrapper[4796]: I0127 06:49:50.729083 4796 generic.go:334] "Generic (PLEG): container finished" podID="5397ad23-1136-4577-b821-4199914b8582" containerID="d553bdb47e3bbb9da2ec6f637adabd2e9da600d8a650c1b6adb854304c95c715" exitCode=0 Jan 27 06:49:50 crc kubenswrapper[4796]: I0127 06:49:50.729218 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2pn7g" event={"ID":"5397ad23-1136-4577-b821-4199914b8582","Type":"ContainerDied","Data":"d553bdb47e3bbb9da2ec6f637adabd2e9da600d8a650c1b6adb854304c95c715"} Jan 27 06:49:50 crc kubenswrapper[4796]: I0127 06:49:50.734375 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m75r4" event={"ID":"8b7db9a0-58a5-496b-826c-6e64920151b8","Type":"ContainerStarted","Data":"5cb503fe64adbd741c6561f3039e585fb914464b36e9f87078f40af3bf029e0a"} Jan 27 06:49:51 crc kubenswrapper[4796]: I0127 06:49:51.745772 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2pn7g" event={"ID":"5397ad23-1136-4577-b821-4199914b8582","Type":"ContainerStarted","Data":"4c64caeb56ac1525a7a23fe5f8bd71165656c8bc814737ab699ed79715c96fdc"} Jan 27 06:49:51 crc kubenswrapper[4796]: I0127 06:49:51.754905 4796 generic.go:334] "Generic (PLEG): container finished" podID="8b7db9a0-58a5-496b-826c-6e64920151b8" containerID="5cb503fe64adbd741c6561f3039e585fb914464b36e9f87078f40af3bf029e0a" exitCode=0 Jan 27 06:49:51 crc kubenswrapper[4796]: I0127 06:49:51.754981 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m75r4" event={"ID":"8b7db9a0-58a5-496b-826c-6e64920151b8","Type":"ContainerDied","Data":"5cb503fe64adbd741c6561f3039e585fb914464b36e9f87078f40af3bf029e0a"} Jan 27 06:49:51 crc kubenswrapper[4796]: I0127 06:49:51.758450 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8r59" event={"ID":"25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae","Type":"ContainerStarted","Data":"145684e338d0a3acbab02ae51c8304b1314dc11007f526099b98c4089bde69f8"} Jan 27 06:49:51 crc kubenswrapper[4796]: I0127 06:49:51.769249 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2pn7g" podStartSLOduration=4.606869884 podStartE2EDuration="57.76922305s" podCreationTimestamp="2026-01-27 06:48:54 +0000 UTC" firstStartedPulling="2026-01-27 06:48:58.109699804 +0000 UTC m=+159.216667131" lastFinishedPulling="2026-01-27 06:49:51.27205293 +0000 UTC m=+212.379020297" observedRunningTime="2026-01-27 06:49:51.768279055 +0000 UTC m=+212.875246382" watchObservedRunningTime="2026-01-27 06:49:51.76922305 +0000 UTC m=+212.876190377" Jan 27 06:49:51 crc 
kubenswrapper[4796]: I0127 06:49:51.825550 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-j8r59" podStartSLOduration=4.449926655 podStartE2EDuration="59.825512069s" podCreationTimestamp="2026-01-27 06:48:52 +0000 UTC" firstStartedPulling="2026-01-27 06:48:55.738720321 +0000 UTC m=+156.845687648" lastFinishedPulling="2026-01-27 06:49:51.114305735 +0000 UTC m=+212.221273062" observedRunningTime="2026-01-27 06:49:51.795017863 +0000 UTC m=+212.901985180" watchObservedRunningTime="2026-01-27 06:49:51.825512069 +0000 UTC m=+212.932479396" Jan 27 06:49:52 crc kubenswrapper[4796]: I0127 06:49:52.762399 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j8r59" Jan 27 06:49:52 crc kubenswrapper[4796]: I0127 06:49:52.762760 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j8r59" Jan 27 06:49:52 crc kubenswrapper[4796]: I0127 06:49:52.770911 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m75r4" event={"ID":"8b7db9a0-58a5-496b-826c-6e64920151b8","Type":"ContainerStarted","Data":"5dde79d0668f5b805e9ba42970a6da9798917c62835e91cc94a0e116d3a6266d"} Jan 27 06:49:52 crc kubenswrapper[4796]: I0127 06:49:52.794252 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m75r4" podStartSLOduration=3.896177219 podStartE2EDuration="1m0.794233951s" podCreationTimestamp="2026-01-27 06:48:52 +0000 UTC" firstStartedPulling="2026-01-27 06:48:55.626694488 +0000 UTC m=+156.733661815" lastFinishedPulling="2026-01-27 06:49:52.52475122 +0000 UTC m=+213.631718547" observedRunningTime="2026-01-27 06:49:52.793143912 +0000 UTC m=+213.900111239" watchObservedRunningTime="2026-01-27 06:49:52.794233951 +0000 UTC m=+213.901201278" Jan 27 06:49:52 crc kubenswrapper[4796]: I0127 06:49:52.968692 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j8r59" Jan 27 06:49:53 crc kubenswrapper[4796]: I0127 06:49:53.226994 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m75r4" Jan 27 06:49:53 crc kubenswrapper[4796]: I0127 06:49:53.227050 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m75r4" Jan 27 06:49:53 crc kubenswrapper[4796]: I0127 06:49:53.404117 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9rz8s" Jan 27 06:49:53 crc kubenswrapper[4796]: I0127 06:49:53.405328 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9rz8s" Jan 27 06:49:53 crc kubenswrapper[4796]: I0127 06:49:53.436454 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9rz8s" Jan 27 06:49:53 crc kubenswrapper[4796]: I0127 06:49:53.781010 4796 generic.go:334] "Generic (PLEG): container finished" podID="47a26ac3-524b-47c5-abb8-d2c4837659e7" containerID="b8b93df8d9345c3ebb46c71e301f48962cbfa22768612e1d4fe069614fb27259" exitCode=0 Jan 27 06:49:53 crc kubenswrapper[4796]: I0127 06:49:53.781087 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8lc8b" 
event={"ID":"47a26ac3-524b-47c5-abb8-d2c4837659e7","Type":"ContainerDied","Data":"b8b93df8d9345c3ebb46c71e301f48962cbfa22768612e1d4fe069614fb27259"} Jan 27 06:49:53 crc kubenswrapper[4796]: I0127 06:49:53.787250 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cv5hm" event={"ID":"242ef06f-796a-4c77-810b-bde4a5fbc087","Type":"ContainerStarted","Data":"cf3074d64e7ca8c6d48cd194a94ed43a140975694438fefcb4a2458a4197c9f4"} Jan 27 06:49:53 crc kubenswrapper[4796]: I0127 06:49:53.851893 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9rz8s" Jan 27 06:49:54 crc kubenswrapper[4796]: I0127 06:49:54.262071 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-m75r4" podUID="8b7db9a0-58a5-496b-826c-6e64920151b8" containerName="registry-server" probeResult="failure" output=< Jan 27 06:49:54 crc kubenswrapper[4796]: timeout: failed to connect service ":50051" within 1s Jan 27 06:49:54 crc kubenswrapper[4796]: > Jan 27 06:49:54 crc kubenswrapper[4796]: I0127 06:49:54.796345 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8lc8b" event={"ID":"47a26ac3-524b-47c5-abb8-d2c4837659e7","Type":"ContainerStarted","Data":"fbbdae8cd5e56f50a5c6abb80372e700e4368c3bf10f314c2dc207f5499c2527"} Jan 27 06:49:54 crc kubenswrapper[4796]: I0127 06:49:54.798452 4796 generic.go:334] "Generic (PLEG): container finished" podID="242ef06f-796a-4c77-810b-bde4a5fbc087" containerID="cf3074d64e7ca8c6d48cd194a94ed43a140975694438fefcb4a2458a4197c9f4" exitCode=0 Jan 27 06:49:54 crc kubenswrapper[4796]: I0127 06:49:54.798567 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cv5hm" event={"ID":"242ef06f-796a-4c77-810b-bde4a5fbc087","Type":"ContainerDied","Data":"cf3074d64e7ca8c6d48cd194a94ed43a140975694438fefcb4a2458a4197c9f4"} Jan 27 06:49:54 crc kubenswrapper[4796]: I0127 06:49:54.814669 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8lc8b" podStartSLOduration=3.437026136 podStartE2EDuration="1m0.814650149s" podCreationTimestamp="2026-01-27 06:48:54 +0000 UTC" firstStartedPulling="2026-01-27 06:48:56.896754691 +0000 UTC m=+158.003722018" lastFinishedPulling="2026-01-27 06:49:54.274378704 +0000 UTC m=+215.381346031" observedRunningTime="2026-01-27 06:49:54.811501636 +0000 UTC m=+215.918468963" watchObservedRunningTime="2026-01-27 06:49:54.814650149 +0000 UTC m=+215.921617476" Jan 27 06:49:55 crc kubenswrapper[4796]: I0127 06:49:55.121254 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8lc8b" Jan 27 06:49:55 crc kubenswrapper[4796]: I0127 06:49:55.121331 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8lc8b" Jan 27 06:49:55 crc kubenswrapper[4796]: I0127 06:49:55.408922 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2pn7g" Jan 27 06:49:55 crc kubenswrapper[4796]: I0127 06:49:55.408968 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2pn7g" Jan 27 06:49:55 crc kubenswrapper[4796]: I0127 06:49:55.931644 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mn94j" 
Jan 27 06:49:55 crc kubenswrapper[4796]: I0127 06:49:55.931983 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mn94j" Jan 27 06:49:55 crc kubenswrapper[4796]: I0127 06:49:55.983601 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2pn7g" Jan 27 06:49:55 crc kubenswrapper[4796]: I0127 06:49:55.987391 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mn94j" Jan 27 06:49:56 crc kubenswrapper[4796]: I0127 06:49:56.172743 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-8lc8b" podUID="47a26ac3-524b-47c5-abb8-d2c4837659e7" containerName="registry-server" probeResult="failure" output=< Jan 27 06:49:56 crc kubenswrapper[4796]: timeout: failed to connect service ":50051" within 1s Jan 27 06:49:56 crc kubenswrapper[4796]: > Jan 27 06:49:56 crc kubenswrapper[4796]: I0127 06:49:56.808331 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9rz8s"] Jan 27 06:49:56 crc kubenswrapper[4796]: I0127 06:49:56.810947 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vrm5z" event={"ID":"e6f0ca0a-ff9c-4420-92bb-517ca68b906c","Type":"ContainerStarted","Data":"51071f9341cf9295a858e668021d6f842a54aa900d37a468932a3303be850e89"} Jan 27 06:49:56 crc kubenswrapper[4796]: I0127 06:49:56.814014 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cv5hm" event={"ID":"242ef06f-796a-4c77-810b-bde4a5fbc087","Type":"ContainerStarted","Data":"0eeed9873078fe1ad09ba89d37c1845d14cf24e8f115de55365945b20d7ce3a3"} Jan 27 06:49:56 crc kubenswrapper[4796]: I0127 06:49:56.814386 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9rz8s" podUID="3a16a514-cf37-4eb5-a775-6dc2573704cf" containerName="registry-server" containerID="cri-o://1a2d092a21e6a34b19a88f51f9525c9d441e6c677aa00b95280916d3cf2260fd" gracePeriod=2 Jan 27 06:49:56 crc kubenswrapper[4796]: I0127 06:49:56.863949 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mn94j" Jan 27 06:49:56 crc kubenswrapper[4796]: I0127 06:49:56.880988 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cv5hm" podStartSLOduration=4.340630235 podStartE2EDuration="1m4.880971205s" podCreationTimestamp="2026-01-27 06:48:52 +0000 UTC" firstStartedPulling="2026-01-27 06:48:55.669596888 +0000 UTC m=+156.776564215" lastFinishedPulling="2026-01-27 06:49:56.209937858 +0000 UTC m=+217.316905185" observedRunningTime="2026-01-27 06:49:56.864235698 +0000 UTC m=+217.971203045" watchObservedRunningTime="2026-01-27 06:49:56.880971205 +0000 UTC m=+217.987938522" Jan 27 06:49:57 crc kubenswrapper[4796]: I0127 06:49:57.241772 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9rz8s" Jan 27 06:49:57 crc kubenswrapper[4796]: I0127 06:49:57.414072 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a16a514-cf37-4eb5-a775-6dc2573704cf-utilities\") pod \"3a16a514-cf37-4eb5-a775-6dc2573704cf\" (UID: \"3a16a514-cf37-4eb5-a775-6dc2573704cf\") " Jan 27 06:49:57 crc kubenswrapper[4796]: I0127 06:49:57.414132 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a16a514-cf37-4eb5-a775-6dc2573704cf-catalog-content\") pod \"3a16a514-cf37-4eb5-a775-6dc2573704cf\" (UID: \"3a16a514-cf37-4eb5-a775-6dc2573704cf\") " Jan 27 06:49:57 crc kubenswrapper[4796]: I0127 06:49:57.414176 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47524\" (UniqueName: \"kubernetes.io/projected/3a16a514-cf37-4eb5-a775-6dc2573704cf-kube-api-access-47524\") pod \"3a16a514-cf37-4eb5-a775-6dc2573704cf\" (UID: \"3a16a514-cf37-4eb5-a775-6dc2573704cf\") " Jan 27 06:49:57 crc kubenswrapper[4796]: I0127 06:49:57.415127 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a16a514-cf37-4eb5-a775-6dc2573704cf-utilities" (OuterVolumeSpecName: "utilities") pod "3a16a514-cf37-4eb5-a775-6dc2573704cf" (UID: "3a16a514-cf37-4eb5-a775-6dc2573704cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:49:57 crc kubenswrapper[4796]: I0127 06:49:57.420624 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a16a514-cf37-4eb5-a775-6dc2573704cf-kube-api-access-47524" (OuterVolumeSpecName: "kube-api-access-47524") pod "3a16a514-cf37-4eb5-a775-6dc2573704cf" (UID: "3a16a514-cf37-4eb5-a775-6dc2573704cf"). InnerVolumeSpecName "kube-api-access-47524". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:49:57 crc kubenswrapper[4796]: I0127 06:49:57.461271 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a16a514-cf37-4eb5-a775-6dc2573704cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a16a514-cf37-4eb5-a775-6dc2573704cf" (UID: "3a16a514-cf37-4eb5-a775-6dc2573704cf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:49:57 crc kubenswrapper[4796]: I0127 06:49:57.515486 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a16a514-cf37-4eb5-a775-6dc2573704cf-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 06:49:57 crc kubenswrapper[4796]: I0127 06:49:57.515519 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a16a514-cf37-4eb5-a775-6dc2573704cf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 06:49:57 crc kubenswrapper[4796]: I0127 06:49:57.515546 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47524\" (UniqueName: \"kubernetes.io/projected/3a16a514-cf37-4eb5-a775-6dc2573704cf-kube-api-access-47524\") on node \"crc\" DevicePath \"\"" Jan 27 06:49:57 crc kubenswrapper[4796]: I0127 06:49:57.820205 4796 generic.go:334] "Generic (PLEG): container finished" podID="e6f0ca0a-ff9c-4420-92bb-517ca68b906c" containerID="51071f9341cf9295a858e668021d6f842a54aa900d37a468932a3303be850e89" exitCode=0 Jan 27 06:49:57 crc kubenswrapper[4796]: I0127 06:49:57.820291 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vrm5z" event={"ID":"e6f0ca0a-ff9c-4420-92bb-517ca68b906c","Type":"ContainerDied","Data":"51071f9341cf9295a858e668021d6f842a54aa900d37a468932a3303be850e89"} Jan 27 06:49:57 crc kubenswrapper[4796]: I0127 06:49:57.824634 4796 generic.go:334] "Generic (PLEG): container finished" podID="3a16a514-cf37-4eb5-a775-6dc2573704cf" containerID="1a2d092a21e6a34b19a88f51f9525c9d441e6c677aa00b95280916d3cf2260fd" exitCode=0 Jan 27 06:49:57 crc kubenswrapper[4796]: I0127 06:49:57.824890 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9rz8s" Jan 27 06:49:57 crc kubenswrapper[4796]: I0127 06:49:57.825238 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9rz8s" event={"ID":"3a16a514-cf37-4eb5-a775-6dc2573704cf","Type":"ContainerDied","Data":"1a2d092a21e6a34b19a88f51f9525c9d441e6c677aa00b95280916d3cf2260fd"} Jan 27 06:49:57 crc kubenswrapper[4796]: I0127 06:49:57.825268 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9rz8s" event={"ID":"3a16a514-cf37-4eb5-a775-6dc2573704cf","Type":"ContainerDied","Data":"9a748d2d3a84faf782d3f04ba209723d6e85056d78524ea6ad140b17fc322a78"} Jan 27 06:49:57 crc kubenswrapper[4796]: I0127 06:49:57.825289 4796 scope.go:117] "RemoveContainer" containerID="1a2d092a21e6a34b19a88f51f9525c9d441e6c677aa00b95280916d3cf2260fd" Jan 27 06:49:57 crc kubenswrapper[4796]: I0127 06:49:57.840974 4796 scope.go:117] "RemoveContainer" containerID="354118ff1fe09861b74c8171d2b6350adb21c27ee48a5971fdcce2ccee3a996d" Jan 27 06:49:57 crc kubenswrapper[4796]: I0127 06:49:57.862076 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9rz8s"] Jan 27 06:49:57 crc kubenswrapper[4796]: I0127 06:49:57.865015 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9rz8s"] Jan 27 06:49:57 crc kubenswrapper[4796]: I0127 06:49:57.874155 4796 scope.go:117] "RemoveContainer" containerID="47ab2bb5c017ea62930e9e81580c46fb691bb90d97f16740c9e945ba9c50c6ed" Jan 27 06:49:57 crc kubenswrapper[4796]: I0127 06:49:57.888179 4796 scope.go:117] "RemoveContainer" containerID="1a2d092a21e6a34b19a88f51f9525c9d441e6c677aa00b95280916d3cf2260fd" Jan 27 06:49:57 crc kubenswrapper[4796]: E0127 06:49:57.888789 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a2d092a21e6a34b19a88f51f9525c9d441e6c677aa00b95280916d3cf2260fd\": container with ID starting with 1a2d092a21e6a34b19a88f51f9525c9d441e6c677aa00b95280916d3cf2260fd not found: ID does not exist" containerID="1a2d092a21e6a34b19a88f51f9525c9d441e6c677aa00b95280916d3cf2260fd" Jan 27 06:49:57 crc kubenswrapper[4796]: I0127 06:49:57.888821 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a2d092a21e6a34b19a88f51f9525c9d441e6c677aa00b95280916d3cf2260fd"} err="failed to get container status \"1a2d092a21e6a34b19a88f51f9525c9d441e6c677aa00b95280916d3cf2260fd\": rpc error: code = NotFound desc = could not find container \"1a2d092a21e6a34b19a88f51f9525c9d441e6c677aa00b95280916d3cf2260fd\": container with ID starting with 1a2d092a21e6a34b19a88f51f9525c9d441e6c677aa00b95280916d3cf2260fd not found: ID does not exist" Jan 27 06:49:57 crc kubenswrapper[4796]: I0127 06:49:57.888842 4796 scope.go:117] "RemoveContainer" containerID="354118ff1fe09861b74c8171d2b6350adb21c27ee48a5971fdcce2ccee3a996d" Jan 27 06:49:57 crc kubenswrapper[4796]: E0127 06:49:57.889350 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"354118ff1fe09861b74c8171d2b6350adb21c27ee48a5971fdcce2ccee3a996d\": container with ID starting with 354118ff1fe09861b74c8171d2b6350adb21c27ee48a5971fdcce2ccee3a996d not found: ID does not exist" containerID="354118ff1fe09861b74c8171d2b6350adb21c27ee48a5971fdcce2ccee3a996d" Jan 27 06:49:57 crc kubenswrapper[4796]: I0127 06:49:57.889404 4796 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"354118ff1fe09861b74c8171d2b6350adb21c27ee48a5971fdcce2ccee3a996d"} err="failed to get container status \"354118ff1fe09861b74c8171d2b6350adb21c27ee48a5971fdcce2ccee3a996d\": rpc error: code = NotFound desc = could not find container \"354118ff1fe09861b74c8171d2b6350adb21c27ee48a5971fdcce2ccee3a996d\": container with ID starting with 354118ff1fe09861b74c8171d2b6350adb21c27ee48a5971fdcce2ccee3a996d not found: ID does not exist" Jan 27 06:49:57 crc kubenswrapper[4796]: I0127 06:49:57.889440 4796 scope.go:117] "RemoveContainer" containerID="47ab2bb5c017ea62930e9e81580c46fb691bb90d97f16740c9e945ba9c50c6ed" Jan 27 06:49:57 crc kubenswrapper[4796]: E0127 06:49:57.889782 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47ab2bb5c017ea62930e9e81580c46fb691bb90d97f16740c9e945ba9c50c6ed\": container with ID starting with 47ab2bb5c017ea62930e9e81580c46fb691bb90d97f16740c9e945ba9c50c6ed not found: ID does not exist" containerID="47ab2bb5c017ea62930e9e81580c46fb691bb90d97f16740c9e945ba9c50c6ed" Jan 27 06:49:57 crc kubenswrapper[4796]: I0127 06:49:57.889842 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47ab2bb5c017ea62930e9e81580c46fb691bb90d97f16740c9e945ba9c50c6ed"} err="failed to get container status \"47ab2bb5c017ea62930e9e81580c46fb691bb90d97f16740c9e945ba9c50c6ed\": rpc error: code = NotFound desc = could not find container \"47ab2bb5c017ea62930e9e81580c46fb691bb90d97f16740c9e945ba9c50c6ed\": container with ID starting with 47ab2bb5c017ea62930e9e81580c46fb691bb90d97f16740c9e945ba9c50c6ed not found: ID does not exist" Jan 27 06:49:58 crc kubenswrapper[4796]: I0127 06:49:58.754053 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a16a514-cf37-4eb5-a775-6dc2573704cf" path="/var/lib/kubelet/pods/3a16a514-cf37-4eb5-a775-6dc2573704cf/volumes" Jan 27 06:50:01 crc kubenswrapper[4796]: I0127 06:50:01.847699 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vrm5z" event={"ID":"e6f0ca0a-ff9c-4420-92bb-517ca68b906c","Type":"ContainerStarted","Data":"2990b97a6732dc2613188226a8f4ae1ff95547205a8f2c3d192d7550552c0cba"} Jan 27 06:50:01 crc kubenswrapper[4796]: I0127 06:50:01.864745 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vrm5z" podStartSLOduration=4.720891806 podStartE2EDuration="1m6.864719739s" podCreationTimestamp="2026-01-27 06:48:55 +0000 UTC" firstStartedPulling="2026-01-27 06:48:58.103525013 +0000 UTC m=+159.210492340" lastFinishedPulling="2026-01-27 06:50:00.247352946 +0000 UTC m=+221.354320273" observedRunningTime="2026-01-27 06:50:01.862528982 +0000 UTC m=+222.969496309" watchObservedRunningTime="2026-01-27 06:50:01.864719739 +0000 UTC m=+222.971687106" Jan 27 06:50:02 crc kubenswrapper[4796]: I0127 06:50:02.815462 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j8r59" Jan 27 06:50:03 crc kubenswrapper[4796]: I0127 06:50:03.075409 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cv5hm" Jan 27 06:50:03 crc kubenswrapper[4796]: I0127 06:50:03.075576 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cv5hm" Jan 27 06:50:03 crc 
kubenswrapper[4796]: I0127 06:50:03.119771 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cv5hm" Jan 27 06:50:03 crc kubenswrapper[4796]: I0127 06:50:03.265395 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m75r4" Jan 27 06:50:03 crc kubenswrapper[4796]: I0127 06:50:03.308598 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m75r4" Jan 27 06:50:03 crc kubenswrapper[4796]: I0127 06:50:03.788294 4796 patch_prober.go:28] interesting pod/machine-config-daemon-qfqgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 06:50:03 crc kubenswrapper[4796]: I0127 06:50:03.788368 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" podUID="84d7512b-555d-440a-b817-deb8ba12f61d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 06:50:03 crc kubenswrapper[4796]: I0127 06:50:03.788428 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" Jan 27 06:50:03 crc kubenswrapper[4796]: I0127 06:50:03.789111 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2"} pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 06:50:03 crc kubenswrapper[4796]: I0127 06:50:03.789189 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" podUID="84d7512b-555d-440a-b817-deb8ba12f61d" containerName="machine-config-daemon" containerID="cri-o://e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2" gracePeriod=600 Jan 27 06:50:03 crc kubenswrapper[4796]: I0127 06:50:03.919200 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cv5hm" Jan 27 06:50:04 crc kubenswrapper[4796]: I0127 06:50:04.882045 4796 generic.go:334] "Generic (PLEG): container finished" podID="84d7512b-555d-440a-b817-deb8ba12f61d" containerID="e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2" exitCode=0 Jan 27 06:50:04 crc kubenswrapper[4796]: I0127 06:50:04.882147 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" event={"ID":"84d7512b-555d-440a-b817-deb8ba12f61d","Type":"ContainerDied","Data":"e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2"} Jan 27 06:50:05 crc kubenswrapper[4796]: I0127 06:50:05.164391 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8lc8b" Jan 27 06:50:05 crc kubenswrapper[4796]: I0127 06:50:05.207446 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m75r4"] Jan 27 06:50:05 crc kubenswrapper[4796]: I0127 06:50:05.207807 4796 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-m75r4" podUID="8b7db9a0-58a5-496b-826c-6e64920151b8" containerName="registry-server" containerID="cri-o://5dde79d0668f5b805e9ba42970a6da9798917c62835e91cc94a0e116d3a6266d" gracePeriod=2 Jan 27 06:50:05 crc kubenswrapper[4796]: I0127 06:50:05.219524 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8lc8b" Jan 27 06:50:05 crc kubenswrapper[4796]: I0127 06:50:05.462604 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2pn7g" Jan 27 06:50:05 crc kubenswrapper[4796]: I0127 06:50:05.655659 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m75r4" Jan 27 06:50:05 crc kubenswrapper[4796]: I0127 06:50:05.825027 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b7db9a0-58a5-496b-826c-6e64920151b8-catalog-content\") pod \"8b7db9a0-58a5-496b-826c-6e64920151b8\" (UID: \"8b7db9a0-58a5-496b-826c-6e64920151b8\") " Jan 27 06:50:05 crc kubenswrapper[4796]: I0127 06:50:05.825379 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b7db9a0-58a5-496b-826c-6e64920151b8-utilities\") pod \"8b7db9a0-58a5-496b-826c-6e64920151b8\" (UID: \"8b7db9a0-58a5-496b-826c-6e64920151b8\") " Jan 27 06:50:05 crc kubenswrapper[4796]: I0127 06:50:05.825429 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92w27\" (UniqueName: \"kubernetes.io/projected/8b7db9a0-58a5-496b-826c-6e64920151b8-kube-api-access-92w27\") pod \"8b7db9a0-58a5-496b-826c-6e64920151b8\" (UID: \"8b7db9a0-58a5-496b-826c-6e64920151b8\") " Jan 27 06:50:05 crc kubenswrapper[4796]: I0127 06:50:05.826086 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b7db9a0-58a5-496b-826c-6e64920151b8-utilities" (OuterVolumeSpecName: "utilities") pod "8b7db9a0-58a5-496b-826c-6e64920151b8" (UID: "8b7db9a0-58a5-496b-826c-6e64920151b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:50:05 crc kubenswrapper[4796]: I0127 06:50:05.830185 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b7db9a0-58a5-496b-826c-6e64920151b8-kube-api-access-92w27" (OuterVolumeSpecName: "kube-api-access-92w27") pod "8b7db9a0-58a5-496b-826c-6e64920151b8" (UID: "8b7db9a0-58a5-496b-826c-6e64920151b8"). InnerVolumeSpecName "kube-api-access-92w27". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:50:05 crc kubenswrapper[4796]: I0127 06:50:05.875309 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b7db9a0-58a5-496b-826c-6e64920151b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8b7db9a0-58a5-496b-826c-6e64920151b8" (UID: "8b7db9a0-58a5-496b-826c-6e64920151b8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:50:05 crc kubenswrapper[4796]: I0127 06:50:05.891455 4796 generic.go:334] "Generic (PLEG): container finished" podID="8b7db9a0-58a5-496b-826c-6e64920151b8" containerID="5dde79d0668f5b805e9ba42970a6da9798917c62835e91cc94a0e116d3a6266d" exitCode=0 Jan 27 06:50:05 crc kubenswrapper[4796]: I0127 06:50:05.891561 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m75r4" event={"ID":"8b7db9a0-58a5-496b-826c-6e64920151b8","Type":"ContainerDied","Data":"5dde79d0668f5b805e9ba42970a6da9798917c62835e91cc94a0e116d3a6266d"} Jan 27 06:50:05 crc kubenswrapper[4796]: I0127 06:50:05.891591 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m75r4" event={"ID":"8b7db9a0-58a5-496b-826c-6e64920151b8","Type":"ContainerDied","Data":"eb489537ef135147b1f8b5124e263e752fdd1ae3fd619e665eddc9df7c02f392"} Jan 27 06:50:05 crc kubenswrapper[4796]: I0127 06:50:05.891590 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m75r4" Jan 27 06:50:05 crc kubenswrapper[4796]: I0127 06:50:05.891607 4796 scope.go:117] "RemoveContainer" containerID="5dde79d0668f5b805e9ba42970a6da9798917c62835e91cc94a0e116d3a6266d" Jan 27 06:50:05 crc kubenswrapper[4796]: I0127 06:50:05.895504 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" event={"ID":"84d7512b-555d-440a-b817-deb8ba12f61d","Type":"ContainerStarted","Data":"0ab50194b0e5182dbd095e094c1d6598ffec9adf9632e07062134c1284a52097"} Jan 27 06:50:05 crc kubenswrapper[4796]: I0127 06:50:05.912400 4796 scope.go:117] "RemoveContainer" containerID="5cb503fe64adbd741c6561f3039e585fb914464b36e9f87078f40af3bf029e0a" Jan 27 06:50:05 crc kubenswrapper[4796]: I0127 06:50:05.926709 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92w27\" (UniqueName: \"kubernetes.io/projected/8b7db9a0-58a5-496b-826c-6e64920151b8-kube-api-access-92w27\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:05 crc kubenswrapper[4796]: I0127 06:50:05.926748 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b7db9a0-58a5-496b-826c-6e64920151b8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:05 crc kubenswrapper[4796]: I0127 06:50:05.926762 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b7db9a0-58a5-496b-826c-6e64920151b8-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:05 crc kubenswrapper[4796]: I0127 06:50:05.949604 4796 scope.go:117] "RemoveContainer" containerID="852acdcb0e03122744b10a1c55e1b622c1f697bf362136f34b8be69899d78aad" Jan 27 06:50:05 crc kubenswrapper[4796]: I0127 06:50:05.952461 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m75r4"] Jan 27 06:50:05 crc kubenswrapper[4796]: I0127 06:50:05.955251 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-m75r4"] Jan 27 06:50:05 crc kubenswrapper[4796]: I0127 06:50:05.975356 4796 scope.go:117] "RemoveContainer" containerID="5dde79d0668f5b805e9ba42970a6da9798917c62835e91cc94a0e116d3a6266d" Jan 27 06:50:05 crc kubenswrapper[4796]: E0127 06:50:05.975879 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5dde79d0668f5b805e9ba42970a6da9798917c62835e91cc94a0e116d3a6266d\": container with ID starting with 5dde79d0668f5b805e9ba42970a6da9798917c62835e91cc94a0e116d3a6266d not found: ID does not exist" containerID="5dde79d0668f5b805e9ba42970a6da9798917c62835e91cc94a0e116d3a6266d" Jan 27 06:50:05 crc kubenswrapper[4796]: I0127 06:50:05.975963 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dde79d0668f5b805e9ba42970a6da9798917c62835e91cc94a0e116d3a6266d"} err="failed to get container status \"5dde79d0668f5b805e9ba42970a6da9798917c62835e91cc94a0e116d3a6266d\": rpc error: code = NotFound desc = could not find container \"5dde79d0668f5b805e9ba42970a6da9798917c62835e91cc94a0e116d3a6266d\": container with ID starting with 5dde79d0668f5b805e9ba42970a6da9798917c62835e91cc94a0e116d3a6266d not found: ID does not exist" Jan 27 06:50:05 crc kubenswrapper[4796]: I0127 06:50:05.976000 4796 scope.go:117] "RemoveContainer" containerID="5cb503fe64adbd741c6561f3039e585fb914464b36e9f87078f40af3bf029e0a" Jan 27 06:50:05 crc kubenswrapper[4796]: E0127 06:50:05.976335 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cb503fe64adbd741c6561f3039e585fb914464b36e9f87078f40af3bf029e0a\": container with ID starting with 5cb503fe64adbd741c6561f3039e585fb914464b36e9f87078f40af3bf029e0a not found: ID does not exist" containerID="5cb503fe64adbd741c6561f3039e585fb914464b36e9f87078f40af3bf029e0a" Jan 27 06:50:05 crc kubenswrapper[4796]: I0127 06:50:05.976371 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cb503fe64adbd741c6561f3039e585fb914464b36e9f87078f40af3bf029e0a"} err="failed to get container status \"5cb503fe64adbd741c6561f3039e585fb914464b36e9f87078f40af3bf029e0a\": rpc error: code = NotFound desc = could not find container \"5cb503fe64adbd741c6561f3039e585fb914464b36e9f87078f40af3bf029e0a\": container with ID starting with 5cb503fe64adbd741c6561f3039e585fb914464b36e9f87078f40af3bf029e0a not found: ID does not exist" Jan 27 06:50:05 crc kubenswrapper[4796]: I0127 06:50:05.976393 4796 scope.go:117] "RemoveContainer" containerID="852acdcb0e03122744b10a1c55e1b622c1f697bf362136f34b8be69899d78aad" Jan 27 06:50:05 crc kubenswrapper[4796]: E0127 06:50:05.976745 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"852acdcb0e03122744b10a1c55e1b622c1f697bf362136f34b8be69899d78aad\": container with ID starting with 852acdcb0e03122744b10a1c55e1b622c1f697bf362136f34b8be69899d78aad not found: ID does not exist" containerID="852acdcb0e03122744b10a1c55e1b622c1f697bf362136f34b8be69899d78aad" Jan 27 06:50:05 crc kubenswrapper[4796]: I0127 06:50:05.976779 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"852acdcb0e03122744b10a1c55e1b622c1f697bf362136f34b8be69899d78aad"} err="failed to get container status \"852acdcb0e03122744b10a1c55e1b622c1f697bf362136f34b8be69899d78aad\": rpc error: code = NotFound desc = could not find container \"852acdcb0e03122744b10a1c55e1b622c1f697bf362136f34b8be69899d78aad\": container with ID starting with 852acdcb0e03122744b10a1c55e1b622c1f697bf362136f34b8be69899d78aad not found: ID does not exist" Jan 27 06:50:06 crc kubenswrapper[4796]: I0127 06:50:06.324343 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vrm5z" Jan 27 06:50:06 crc 
kubenswrapper[4796]: I0127 06:50:06.324710 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vrm5z" Jan 27 06:50:06 crc kubenswrapper[4796]: I0127 06:50:06.753736 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b7db9a0-58a5-496b-826c-6e64920151b8" path="/var/lib/kubelet/pods/8b7db9a0-58a5-496b-826c-6e64920151b8/volumes" Jan 27 06:50:07 crc kubenswrapper[4796]: I0127 06:50:07.366916 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vrm5z" podUID="e6f0ca0a-ff9c-4420-92bb-517ca68b906c" containerName="registry-server" probeResult="failure" output=< Jan 27 06:50:07 crc kubenswrapper[4796]: timeout: failed to connect service ":50051" within 1s Jan 27 06:50:07 crc kubenswrapper[4796]: > Jan 27 06:50:07 crc kubenswrapper[4796]: I0127 06:50:07.482485 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-bxmhd"] Jan 27 06:50:09 crc kubenswrapper[4796]: I0127 06:50:09.007498 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2pn7g"] Jan 27 06:50:09 crc kubenswrapper[4796]: I0127 06:50:09.009169 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2pn7g" podUID="5397ad23-1136-4577-b821-4199914b8582" containerName="registry-server" containerID="cri-o://4c64caeb56ac1525a7a23fe5f8bd71165656c8bc814737ab699ed79715c96fdc" gracePeriod=2 Jan 27 06:50:09 crc kubenswrapper[4796]: I0127 06:50:09.921805 4796 generic.go:334] "Generic (PLEG): container finished" podID="5397ad23-1136-4577-b821-4199914b8582" containerID="4c64caeb56ac1525a7a23fe5f8bd71165656c8bc814737ab699ed79715c96fdc" exitCode=0 Jan 27 06:50:09 crc kubenswrapper[4796]: I0127 06:50:09.921872 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2pn7g" event={"ID":"5397ad23-1136-4577-b821-4199914b8582","Type":"ContainerDied","Data":"4c64caeb56ac1525a7a23fe5f8bd71165656c8bc814737ab699ed79715c96fdc"} Jan 27 06:50:10 crc kubenswrapper[4796]: I0127 06:50:10.122490 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2pn7g" Jan 27 06:50:10 crc kubenswrapper[4796]: I0127 06:50:10.282022 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5397ad23-1136-4577-b821-4199914b8582-catalog-content\") pod \"5397ad23-1136-4577-b821-4199914b8582\" (UID: \"5397ad23-1136-4577-b821-4199914b8582\") " Jan 27 06:50:10 crc kubenswrapper[4796]: I0127 06:50:10.282189 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5397ad23-1136-4577-b821-4199914b8582-utilities\") pod \"5397ad23-1136-4577-b821-4199914b8582\" (UID: \"5397ad23-1136-4577-b821-4199914b8582\") " Jan 27 06:50:10 crc kubenswrapper[4796]: I0127 06:50:10.282390 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47d6z\" (UniqueName: \"kubernetes.io/projected/5397ad23-1136-4577-b821-4199914b8582-kube-api-access-47d6z\") pod \"5397ad23-1136-4577-b821-4199914b8582\" (UID: \"5397ad23-1136-4577-b821-4199914b8582\") " Jan 27 06:50:10 crc kubenswrapper[4796]: I0127 06:50:10.283680 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5397ad23-1136-4577-b821-4199914b8582-utilities" (OuterVolumeSpecName: "utilities") pod "5397ad23-1136-4577-b821-4199914b8582" (UID: "5397ad23-1136-4577-b821-4199914b8582"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:50:10 crc kubenswrapper[4796]: I0127 06:50:10.292489 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5397ad23-1136-4577-b821-4199914b8582-kube-api-access-47d6z" (OuterVolumeSpecName: "kube-api-access-47d6z") pod "5397ad23-1136-4577-b821-4199914b8582" (UID: "5397ad23-1136-4577-b821-4199914b8582"). InnerVolumeSpecName "kube-api-access-47d6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:50:10 crc kubenswrapper[4796]: I0127 06:50:10.301111 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5397ad23-1136-4577-b821-4199914b8582-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5397ad23-1136-4577-b821-4199914b8582" (UID: "5397ad23-1136-4577-b821-4199914b8582"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:50:10 crc kubenswrapper[4796]: I0127 06:50:10.383936 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47d6z\" (UniqueName: \"kubernetes.io/projected/5397ad23-1136-4577-b821-4199914b8582-kube-api-access-47d6z\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:10 crc kubenswrapper[4796]: I0127 06:50:10.383978 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5397ad23-1136-4577-b821-4199914b8582-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:10 crc kubenswrapper[4796]: I0127 06:50:10.383988 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5397ad23-1136-4577-b821-4199914b8582-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:10 crc kubenswrapper[4796]: I0127 06:50:10.932933 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2pn7g" event={"ID":"5397ad23-1136-4577-b821-4199914b8582","Type":"ContainerDied","Data":"173ce4d8662014d0c78dfbbfa004d9f03a25f6092be0b2e99a8665d9ea0e969f"} Jan 27 06:50:10 crc kubenswrapper[4796]: I0127 06:50:10.933016 4796 scope.go:117] "RemoveContainer" containerID="4c64caeb56ac1525a7a23fe5f8bd71165656c8bc814737ab699ed79715c96fdc" Jan 27 06:50:10 crc kubenswrapper[4796]: I0127 06:50:10.933032 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2pn7g" Jan 27 06:50:10 crc kubenswrapper[4796]: I0127 06:50:10.957307 4796 scope.go:117] "RemoveContainer" containerID="d553bdb47e3bbb9da2ec6f637adabd2e9da600d8a650c1b6adb854304c95c715" Jan 27 06:50:10 crc kubenswrapper[4796]: I0127 06:50:10.959711 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2pn7g"] Jan 27 06:50:10 crc kubenswrapper[4796]: I0127 06:50:10.962963 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2pn7g"] Jan 27 06:50:10 crc kubenswrapper[4796]: I0127 06:50:10.978165 4796 scope.go:117] "RemoveContainer" containerID="b010a2a047406e03fc00ee3e681aee8bd04ca70cbb8e8134c3eccd326d3b11e0" Jan 27 06:50:11 crc kubenswrapper[4796]: I0127 06:50:11.819754 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f5ccbdccf-jmflt"] Jan 27 06:50:11 crc kubenswrapper[4796]: I0127 06:50:11.820006 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-f5ccbdccf-jmflt" podUID="47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60" containerName="controller-manager" containerID="cri-o://0d1c5f795317c1a7df695e3f5d16ded69c5c5bd68ebc1f4d72f3452496dffd61" gracePeriod=30 Jan 27 06:50:11 crc kubenswrapper[4796]: I0127 06:50:11.922524 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79c58b57df-5s6q9"] Jan 27 06:50:11 crc kubenswrapper[4796]: I0127 06:50:11.922756 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-79c58b57df-5s6q9" podUID="052d19d2-cd0c-4ddb-bc61-b5c40e0fba95" containerName="route-controller-manager" containerID="cri-o://9f3051e0f3fb5461316bde69b05ffdf88b9fd15b9b5e09363fe60c9facd13162" gracePeriod=30 Jan 27 06:50:11 crc kubenswrapper[4796]: I0127 06:50:11.945987 4796 generic.go:334] "Generic 
(PLEG): container finished" podID="47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60" containerID="0d1c5f795317c1a7df695e3f5d16ded69c5c5bd68ebc1f4d72f3452496dffd61" exitCode=0 Jan 27 06:50:11 crc kubenswrapper[4796]: I0127 06:50:11.946194 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f5ccbdccf-jmflt" event={"ID":"47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60","Type":"ContainerDied","Data":"0d1c5f795317c1a7df695e3f5d16ded69c5c5bd68ebc1f4d72f3452496dffd61"} Jan 27 06:50:12 crc kubenswrapper[4796]: I0127 06:50:12.361362 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79c58b57df-5s6q9" Jan 27 06:50:12 crc kubenswrapper[4796]: I0127 06:50:12.365623 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f5ccbdccf-jmflt" Jan 27 06:50:12 crc kubenswrapper[4796]: I0127 06:50:12.516286 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/052d19d2-cd0c-4ddb-bc61-b5c40e0fba95-client-ca\") pod \"052d19d2-cd0c-4ddb-bc61-b5c40e0fba95\" (UID: \"052d19d2-cd0c-4ddb-bc61-b5c40e0fba95\") " Jan 27 06:50:12 crc kubenswrapper[4796]: I0127 06:50:12.516347 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/052d19d2-cd0c-4ddb-bc61-b5c40e0fba95-config\") pod \"052d19d2-cd0c-4ddb-bc61-b5c40e0fba95\" (UID: \"052d19d2-cd0c-4ddb-bc61-b5c40e0fba95\") " Jan 27 06:50:12 crc kubenswrapper[4796]: I0127 06:50:12.516398 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lc7lw\" (UniqueName: \"kubernetes.io/projected/052d19d2-cd0c-4ddb-bc61-b5c40e0fba95-kube-api-access-lc7lw\") pod \"052d19d2-cd0c-4ddb-bc61-b5c40e0fba95\" (UID: \"052d19d2-cd0c-4ddb-bc61-b5c40e0fba95\") " Jan 27 06:50:12 crc kubenswrapper[4796]: I0127 06:50:12.516432 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60-proxy-ca-bundles\") pod \"47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60\" (UID: \"47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60\") " Jan 27 06:50:12 crc kubenswrapper[4796]: I0127 06:50:12.516482 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/052d19d2-cd0c-4ddb-bc61-b5c40e0fba95-serving-cert\") pod \"052d19d2-cd0c-4ddb-bc61-b5c40e0fba95\" (UID: \"052d19d2-cd0c-4ddb-bc61-b5c40e0fba95\") " Jan 27 06:50:12 crc kubenswrapper[4796]: I0127 06:50:12.516502 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60-client-ca\") pod \"47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60\" (UID: \"47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60\") " Jan 27 06:50:12 crc kubenswrapper[4796]: I0127 06:50:12.516589 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgbhh\" (UniqueName: \"kubernetes.io/projected/47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60-kube-api-access-xgbhh\") pod \"47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60\" (UID: \"47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60\") " Jan 27 06:50:12 crc kubenswrapper[4796]: I0127 06:50:12.516615 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60-config\") pod \"47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60\" (UID: \"47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60\") " Jan 27 06:50:12 crc kubenswrapper[4796]: I0127 06:50:12.516649 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60-serving-cert\") pod \"47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60\" (UID: \"47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60\") " Jan 27 06:50:12 crc kubenswrapper[4796]: I0127 06:50:12.517558 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60-client-ca" (OuterVolumeSpecName: "client-ca") pod "47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60" (UID: "47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:50:12 crc kubenswrapper[4796]: I0127 06:50:12.517587 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/052d19d2-cd0c-4ddb-bc61-b5c40e0fba95-config" (OuterVolumeSpecName: "config") pod "052d19d2-cd0c-4ddb-bc61-b5c40e0fba95" (UID: "052d19d2-cd0c-4ddb-bc61-b5c40e0fba95"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:50:12 crc kubenswrapper[4796]: I0127 06:50:12.517614 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60" (UID: "47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:50:12 crc kubenswrapper[4796]: I0127 06:50:12.517899 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60-config" (OuterVolumeSpecName: "config") pod "47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60" (UID: "47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:50:12 crc kubenswrapper[4796]: I0127 06:50:12.518054 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/052d19d2-cd0c-4ddb-bc61-b5c40e0fba95-client-ca" (OuterVolumeSpecName: "client-ca") pod "052d19d2-cd0c-4ddb-bc61-b5c40e0fba95" (UID: "052d19d2-cd0c-4ddb-bc61-b5c40e0fba95"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:50:12 crc kubenswrapper[4796]: I0127 06:50:12.521862 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/052d19d2-cd0c-4ddb-bc61-b5c40e0fba95-kube-api-access-lc7lw" (OuterVolumeSpecName: "kube-api-access-lc7lw") pod "052d19d2-cd0c-4ddb-bc61-b5c40e0fba95" (UID: "052d19d2-cd0c-4ddb-bc61-b5c40e0fba95"). InnerVolumeSpecName "kube-api-access-lc7lw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:50:12 crc kubenswrapper[4796]: I0127 06:50:12.523260 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/052d19d2-cd0c-4ddb-bc61-b5c40e0fba95-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "052d19d2-cd0c-4ddb-bc61-b5c40e0fba95" (UID: "052d19d2-cd0c-4ddb-bc61-b5c40e0fba95"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:50:12 crc kubenswrapper[4796]: I0127 06:50:12.523524 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60" (UID: "47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:50:12 crc kubenswrapper[4796]: I0127 06:50:12.524807 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60-kube-api-access-xgbhh" (OuterVolumeSpecName: "kube-api-access-xgbhh") pod "47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60" (UID: "47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60"). InnerVolumeSpecName "kube-api-access-xgbhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:50:12 crc kubenswrapper[4796]: I0127 06:50:12.618499 4796 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:12 crc kubenswrapper[4796]: I0127 06:50:12.618562 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lc7lw\" (UniqueName: \"kubernetes.io/projected/052d19d2-cd0c-4ddb-bc61-b5c40e0fba95-kube-api-access-lc7lw\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:12 crc kubenswrapper[4796]: I0127 06:50:12.618579 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/052d19d2-cd0c-4ddb-bc61-b5c40e0fba95-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:12 crc kubenswrapper[4796]: I0127 06:50:12.618590 4796 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:12 crc kubenswrapper[4796]: I0127 06:50:12.618601 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:12 crc kubenswrapper[4796]: I0127 06:50:12.618639 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgbhh\" (UniqueName: \"kubernetes.io/projected/47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60-kube-api-access-xgbhh\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:12 crc kubenswrapper[4796]: I0127 06:50:12.618651 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:12 crc kubenswrapper[4796]: I0127 06:50:12.618663 4796 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/052d19d2-cd0c-4ddb-bc61-b5c40e0fba95-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:12 crc kubenswrapper[4796]: I0127 06:50:12.618674 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/052d19d2-cd0c-4ddb-bc61-b5c40e0fba95-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:12 crc kubenswrapper[4796]: I0127 06:50:12.753694 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5397ad23-1136-4577-b821-4199914b8582" 
path="/var/lib/kubelet/pods/5397ad23-1136-4577-b821-4199914b8582/volumes" Jan 27 06:50:12 crc kubenswrapper[4796]: I0127 06:50:12.958129 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f5ccbdccf-jmflt" Jan 27 06:50:12 crc kubenswrapper[4796]: I0127 06:50:12.958133 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f5ccbdccf-jmflt" event={"ID":"47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60","Type":"ContainerDied","Data":"95d3f0be3f81568ddc2dff51115f91b35c7ce374e3e5cbe5e0c81fd281d34c91"} Jan 27 06:50:12 crc kubenswrapper[4796]: I0127 06:50:12.958358 4796 scope.go:117] "RemoveContainer" containerID="0d1c5f795317c1a7df695e3f5d16ded69c5c5bd68ebc1f4d72f3452496dffd61" Jan 27 06:50:12 crc kubenswrapper[4796]: I0127 06:50:12.962592 4796 generic.go:334] "Generic (PLEG): container finished" podID="052d19d2-cd0c-4ddb-bc61-b5c40e0fba95" containerID="9f3051e0f3fb5461316bde69b05ffdf88b9fd15b9b5e09363fe60c9facd13162" exitCode=0 Jan 27 06:50:12 crc kubenswrapper[4796]: I0127 06:50:12.962659 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79c58b57df-5s6q9" event={"ID":"052d19d2-cd0c-4ddb-bc61-b5c40e0fba95","Type":"ContainerDied","Data":"9f3051e0f3fb5461316bde69b05ffdf88b9fd15b9b5e09363fe60c9facd13162"} Jan 27 06:50:12 crc kubenswrapper[4796]: I0127 06:50:12.962692 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79c58b57df-5s6q9" event={"ID":"052d19d2-cd0c-4ddb-bc61-b5c40e0fba95","Type":"ContainerDied","Data":"e97b97ed32bf998da5a29d3a8741adc765a94ee0a4961b95faf4ab6359e10fe9"} Jan 27 06:50:12 crc kubenswrapper[4796]: I0127 06:50:12.962775 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79c58b57df-5s6q9" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.004318 4796 scope.go:117] "RemoveContainer" containerID="9f3051e0f3fb5461316bde69b05ffdf88b9fd15b9b5e09363fe60c9facd13162" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.005583 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f5ccbdccf-jmflt"] Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.029782 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-f5ccbdccf-jmflt"] Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.042830 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79fd6c9864-wxp27"] Jan 27 06:50:13 crc kubenswrapper[4796]: E0127 06:50:13.043147 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60" containerName="controller-manager" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.043167 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60" containerName="controller-manager" Jan 27 06:50:13 crc kubenswrapper[4796]: E0127 06:50:13.043186 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b7db9a0-58a5-496b-826c-6e64920151b8" containerName="extract-content" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.043198 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b7db9a0-58a5-496b-826c-6e64920151b8" containerName="extract-content" Jan 27 06:50:13 crc kubenswrapper[4796]: E0127 06:50:13.043211 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b7db9a0-58a5-496b-826c-6e64920151b8" containerName="extract-utilities" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.043221 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b7db9a0-58a5-496b-826c-6e64920151b8" containerName="extract-utilities" Jan 27 06:50:13 crc kubenswrapper[4796]: E0127 06:50:13.043233 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5397ad23-1136-4577-b821-4199914b8582" containerName="extract-utilities" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.043244 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="5397ad23-1136-4577-b821-4199914b8582" containerName="extract-utilities" Jan 27 06:50:13 crc kubenswrapper[4796]: E0127 06:50:13.043265 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b7db9a0-58a5-496b-826c-6e64920151b8" containerName="registry-server" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.043274 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b7db9a0-58a5-496b-826c-6e64920151b8" containerName="registry-server" Jan 27 06:50:13 crc kubenswrapper[4796]: E0127 06:50:13.043291 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="052d19d2-cd0c-4ddb-bc61-b5c40e0fba95" containerName="route-controller-manager" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.043301 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="052d19d2-cd0c-4ddb-bc61-b5c40e0fba95" containerName="route-controller-manager" Jan 27 06:50:13 crc kubenswrapper[4796]: E0127 06:50:13.043314 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a16a514-cf37-4eb5-a775-6dc2573704cf" containerName="registry-server" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.043324 4796 
state_mem.go:107] "Deleted CPUSet assignment" podUID="3a16a514-cf37-4eb5-a775-6dc2573704cf" containerName="registry-server" Jan 27 06:50:13 crc kubenswrapper[4796]: E0127 06:50:13.043339 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a16a514-cf37-4eb5-a775-6dc2573704cf" containerName="extract-utilities" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.043349 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a16a514-cf37-4eb5-a775-6dc2573704cf" containerName="extract-utilities" Jan 27 06:50:13 crc kubenswrapper[4796]: E0127 06:50:13.043379 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5397ad23-1136-4577-b821-4199914b8582" containerName="extract-content" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.043390 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="5397ad23-1136-4577-b821-4199914b8582" containerName="extract-content" Jan 27 06:50:13 crc kubenswrapper[4796]: E0127 06:50:13.043404 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a16a514-cf37-4eb5-a775-6dc2573704cf" containerName="extract-content" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.043413 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a16a514-cf37-4eb5-a775-6dc2573704cf" containerName="extract-content" Jan 27 06:50:13 crc kubenswrapper[4796]: E0127 06:50:13.043429 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5397ad23-1136-4577-b821-4199914b8582" containerName="registry-server" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.043439 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="5397ad23-1136-4577-b821-4199914b8582" containerName="registry-server" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.043601 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="5397ad23-1136-4577-b821-4199914b8582" containerName="registry-server" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.043622 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b7db9a0-58a5-496b-826c-6e64920151b8" containerName="registry-server" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.043634 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a16a514-cf37-4eb5-a775-6dc2573704cf" containerName="registry-server" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.043647 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="052d19d2-cd0c-4ddb-bc61-b5c40e0fba95" containerName="route-controller-manager" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.043660 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60" containerName="controller-manager" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.044967 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79fd6c9864-wxp27" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.049321 4796 scope.go:117] "RemoveContainer" containerID="9f3051e0f3fb5461316bde69b05ffdf88b9fd15b9b5e09363fe60c9facd13162" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.049364 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7bc8d6f54d-r5zzd"] Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.050202 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.050373 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bc8d6f54d-r5zzd" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.050380 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.050459 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.050737 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.051313 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.050808 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 06:50:13 crc kubenswrapper[4796]: E0127 06:50:13.051853 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f3051e0f3fb5461316bde69b05ffdf88b9fd15b9b5e09363fe60c9facd13162\": container with ID starting with 9f3051e0f3fb5461316bde69b05ffdf88b9fd15b9b5e09363fe60c9facd13162 not found: ID does not exist" containerID="9f3051e0f3fb5461316bde69b05ffdf88b9fd15b9b5e09363fe60c9facd13162" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.051910 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f3051e0f3fb5461316bde69b05ffdf88b9fd15b9b5e09363fe60c9facd13162"} err="failed to get container status \"9f3051e0f3fb5461316bde69b05ffdf88b9fd15b9b5e09363fe60c9facd13162\": rpc error: code = NotFound desc = could not find container \"9f3051e0f3fb5461316bde69b05ffdf88b9fd15b9b5e09363fe60c9facd13162\": container with ID starting with 9f3051e0f3fb5461316bde69b05ffdf88b9fd15b9b5e09363fe60c9facd13162 not found: ID does not exist" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.055067 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.056197 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.056505 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.056861 4796 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.057026 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.057068 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79c58b57df-5s6q9"] Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.060928 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.062520 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79c58b57df-5s6q9"] Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.070146 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79fd6c9864-wxp27"] Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.073458 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.074226 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7bc8d6f54d-r5zzd"] Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.227867 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d02aeed-21ec-4659-80a0-b4e3b0e9bc25-client-ca\") pod \"controller-manager-7bc8d6f54d-r5zzd\" (UID: \"8d02aeed-21ec-4659-80a0-b4e3b0e9bc25\") " pod="openshift-controller-manager/controller-manager-7bc8d6f54d-r5zzd" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.227952 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8d02aeed-21ec-4659-80a0-b4e3b0e9bc25-proxy-ca-bundles\") pod \"controller-manager-7bc8d6f54d-r5zzd\" (UID: \"8d02aeed-21ec-4659-80a0-b4e3b0e9bc25\") " pod="openshift-controller-manager/controller-manager-7bc8d6f54d-r5zzd" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.227984 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m29jg\" (UniqueName: \"kubernetes.io/projected/8d02aeed-21ec-4659-80a0-b4e3b0e9bc25-kube-api-access-m29jg\") pod \"controller-manager-7bc8d6f54d-r5zzd\" (UID: \"8d02aeed-21ec-4659-80a0-b4e3b0e9bc25\") " pod="openshift-controller-manager/controller-manager-7bc8d6f54d-r5zzd" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.228019 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9lzr\" (UniqueName: \"kubernetes.io/projected/4ac53e96-00de-4722-97eb-7137ef4431da-kube-api-access-k9lzr\") pod \"route-controller-manager-79fd6c9864-wxp27\" (UID: \"4ac53e96-00de-4722-97eb-7137ef4431da\") " pod="openshift-route-controller-manager/route-controller-manager-79fd6c9864-wxp27" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.228051 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ac53e96-00de-4722-97eb-7137ef4431da-serving-cert\") pod \"route-controller-manager-79fd6c9864-wxp27\" 
(UID: \"4ac53e96-00de-4722-97eb-7137ef4431da\") " pod="openshift-route-controller-manager/route-controller-manager-79fd6c9864-wxp27" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.228253 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ac53e96-00de-4722-97eb-7137ef4431da-config\") pod \"route-controller-manager-79fd6c9864-wxp27\" (UID: \"4ac53e96-00de-4722-97eb-7137ef4431da\") " pod="openshift-route-controller-manager/route-controller-manager-79fd6c9864-wxp27" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.228505 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ac53e96-00de-4722-97eb-7137ef4431da-client-ca\") pod \"route-controller-manager-79fd6c9864-wxp27\" (UID: \"4ac53e96-00de-4722-97eb-7137ef4431da\") " pod="openshift-route-controller-manager/route-controller-manager-79fd6c9864-wxp27" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.228611 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d02aeed-21ec-4659-80a0-b4e3b0e9bc25-serving-cert\") pod \"controller-manager-7bc8d6f54d-r5zzd\" (UID: \"8d02aeed-21ec-4659-80a0-b4e3b0e9bc25\") " pod="openshift-controller-manager/controller-manager-7bc8d6f54d-r5zzd" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.228666 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d02aeed-21ec-4659-80a0-b4e3b0e9bc25-config\") pod \"controller-manager-7bc8d6f54d-r5zzd\" (UID: \"8d02aeed-21ec-4659-80a0-b4e3b0e9bc25\") " pod="openshift-controller-manager/controller-manager-7bc8d6f54d-r5zzd" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.322599 4796 patch_prober.go:28] interesting pod/controller-manager-f5ccbdccf-jmflt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.322672 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-f5ccbdccf-jmflt" podUID="47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.329575 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ac53e96-00de-4722-97eb-7137ef4431da-serving-cert\") pod \"route-controller-manager-79fd6c9864-wxp27\" (UID: \"4ac53e96-00de-4722-97eb-7137ef4431da\") " pod="openshift-route-controller-manager/route-controller-manager-79fd6c9864-wxp27" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.329618 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ac53e96-00de-4722-97eb-7137ef4431da-config\") pod \"route-controller-manager-79fd6c9864-wxp27\" (UID: \"4ac53e96-00de-4722-97eb-7137ef4431da\") " 
pod="openshift-route-controller-manager/route-controller-manager-79fd6c9864-wxp27" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.329659 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ac53e96-00de-4722-97eb-7137ef4431da-client-ca\") pod \"route-controller-manager-79fd6c9864-wxp27\" (UID: \"4ac53e96-00de-4722-97eb-7137ef4431da\") " pod="openshift-route-controller-manager/route-controller-manager-79fd6c9864-wxp27" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.329675 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d02aeed-21ec-4659-80a0-b4e3b0e9bc25-serving-cert\") pod \"controller-manager-7bc8d6f54d-r5zzd\" (UID: \"8d02aeed-21ec-4659-80a0-b4e3b0e9bc25\") " pod="openshift-controller-manager/controller-manager-7bc8d6f54d-r5zzd" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.329692 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d02aeed-21ec-4659-80a0-b4e3b0e9bc25-config\") pod \"controller-manager-7bc8d6f54d-r5zzd\" (UID: \"8d02aeed-21ec-4659-80a0-b4e3b0e9bc25\") " pod="openshift-controller-manager/controller-manager-7bc8d6f54d-r5zzd" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.329714 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d02aeed-21ec-4659-80a0-b4e3b0e9bc25-client-ca\") pod \"controller-manager-7bc8d6f54d-r5zzd\" (UID: \"8d02aeed-21ec-4659-80a0-b4e3b0e9bc25\") " pod="openshift-controller-manager/controller-manager-7bc8d6f54d-r5zzd" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.329742 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8d02aeed-21ec-4659-80a0-b4e3b0e9bc25-proxy-ca-bundles\") pod \"controller-manager-7bc8d6f54d-r5zzd\" (UID: \"8d02aeed-21ec-4659-80a0-b4e3b0e9bc25\") " pod="openshift-controller-manager/controller-manager-7bc8d6f54d-r5zzd" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.329758 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m29jg\" (UniqueName: \"kubernetes.io/projected/8d02aeed-21ec-4659-80a0-b4e3b0e9bc25-kube-api-access-m29jg\") pod \"controller-manager-7bc8d6f54d-r5zzd\" (UID: \"8d02aeed-21ec-4659-80a0-b4e3b0e9bc25\") " pod="openshift-controller-manager/controller-manager-7bc8d6f54d-r5zzd" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.329784 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9lzr\" (UniqueName: \"kubernetes.io/projected/4ac53e96-00de-4722-97eb-7137ef4431da-kube-api-access-k9lzr\") pod \"route-controller-manager-79fd6c9864-wxp27\" (UID: \"4ac53e96-00de-4722-97eb-7137ef4431da\") " pod="openshift-route-controller-manager/route-controller-manager-79fd6c9864-wxp27" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.331250 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d02aeed-21ec-4659-80a0-b4e3b0e9bc25-client-ca\") pod \"controller-manager-7bc8d6f54d-r5zzd\" (UID: \"8d02aeed-21ec-4659-80a0-b4e3b0e9bc25\") " pod="openshift-controller-manager/controller-manager-7bc8d6f54d-r5zzd" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.331726 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ac53e96-00de-4722-97eb-7137ef4431da-client-ca\") pod \"route-controller-manager-79fd6c9864-wxp27\" (UID: \"4ac53e96-00de-4722-97eb-7137ef4431da\") " pod="openshift-route-controller-manager/route-controller-manager-79fd6c9864-wxp27" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.331942 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ac53e96-00de-4722-97eb-7137ef4431da-config\") pod \"route-controller-manager-79fd6c9864-wxp27\" (UID: \"4ac53e96-00de-4722-97eb-7137ef4431da\") " pod="openshift-route-controller-manager/route-controller-manager-79fd6c9864-wxp27" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.332636 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d02aeed-21ec-4659-80a0-b4e3b0e9bc25-config\") pod \"controller-manager-7bc8d6f54d-r5zzd\" (UID: \"8d02aeed-21ec-4659-80a0-b4e3b0e9bc25\") " pod="openshift-controller-manager/controller-manager-7bc8d6f54d-r5zzd" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.332707 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8d02aeed-21ec-4659-80a0-b4e3b0e9bc25-proxy-ca-bundles\") pod \"controller-manager-7bc8d6f54d-r5zzd\" (UID: \"8d02aeed-21ec-4659-80a0-b4e3b0e9bc25\") " pod="openshift-controller-manager/controller-manager-7bc8d6f54d-r5zzd" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.342173 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d02aeed-21ec-4659-80a0-b4e3b0e9bc25-serving-cert\") pod \"controller-manager-7bc8d6f54d-r5zzd\" (UID: \"8d02aeed-21ec-4659-80a0-b4e3b0e9bc25\") " pod="openshift-controller-manager/controller-manager-7bc8d6f54d-r5zzd" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.344214 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m29jg\" (UniqueName: \"kubernetes.io/projected/8d02aeed-21ec-4659-80a0-b4e3b0e9bc25-kube-api-access-m29jg\") pod \"controller-manager-7bc8d6f54d-r5zzd\" (UID: \"8d02aeed-21ec-4659-80a0-b4e3b0e9bc25\") " pod="openshift-controller-manager/controller-manager-7bc8d6f54d-r5zzd" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.344271 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ac53e96-00de-4722-97eb-7137ef4431da-serving-cert\") pod \"route-controller-manager-79fd6c9864-wxp27\" (UID: \"4ac53e96-00de-4722-97eb-7137ef4431da\") " pod="openshift-route-controller-manager/route-controller-manager-79fd6c9864-wxp27" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.345939 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9lzr\" (UniqueName: \"kubernetes.io/projected/4ac53e96-00de-4722-97eb-7137ef4431da-kube-api-access-k9lzr\") pod \"route-controller-manager-79fd6c9864-wxp27\" (UID: \"4ac53e96-00de-4722-97eb-7137ef4431da\") " pod="openshift-route-controller-manager/route-controller-manager-79fd6c9864-wxp27" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.377379 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79fd6c9864-wxp27" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.388641 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bc8d6f54d-r5zzd" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.654562 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79fd6c9864-wxp27"] Jan 27 06:50:13 crc kubenswrapper[4796]: W0127 06:50:13.662677 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ac53e96_00de_4722_97eb_7137ef4431da.slice/crio-701acdf24f0d41c7ce05165601e880a234f946016ba96730eac64591d5923bd0 WatchSource:0}: Error finding container 701acdf24f0d41c7ce05165601e880a234f946016ba96730eac64591d5923bd0: Status 404 returned error can't find the container with id 701acdf24f0d41c7ce05165601e880a234f946016ba96730eac64591d5923bd0 Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.790890 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7bc8d6f54d-r5zzd"] Jan 27 06:50:13 crc kubenswrapper[4796]: W0127 06:50:13.799822 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d02aeed_21ec_4659_80a0_b4e3b0e9bc25.slice/crio-c8437a87f12a8b349dbfcfc9bfdd0cfb89bb38fc6583da76af5c81a90cac9293 WatchSource:0}: Error finding container c8437a87f12a8b349dbfcfc9bfdd0cfb89bb38fc6583da76af5c81a90cac9293: Status 404 returned error can't find the container with id c8437a87f12a8b349dbfcfc9bfdd0cfb89bb38fc6583da76af5c81a90cac9293 Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.973575 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bc8d6f54d-r5zzd" event={"ID":"8d02aeed-21ec-4659-80a0-b4e3b0e9bc25","Type":"ContainerStarted","Data":"b7fd09ca1539bc9b43bc21b6d07e2093ec43a25d7947c944eb5da30147e1fa17"} Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.974154 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7bc8d6f54d-r5zzd" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.974205 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bc8d6f54d-r5zzd" event={"ID":"8d02aeed-21ec-4659-80a0-b4e3b0e9bc25","Type":"ContainerStarted","Data":"c8437a87f12a8b349dbfcfc9bfdd0cfb89bb38fc6583da76af5c81a90cac9293"} Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.975562 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79fd6c9864-wxp27" event={"ID":"4ac53e96-00de-4722-97eb-7137ef4431da","Type":"ContainerStarted","Data":"b418cd56c17d6f8e65197144e14a7d5f620e832f41492f583ad1bc0438e7352e"} Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.975612 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79fd6c9864-wxp27" event={"ID":"4ac53e96-00de-4722-97eb-7137ef4431da","Type":"ContainerStarted","Data":"701acdf24f0d41c7ce05165601e880a234f946016ba96730eac64591d5923bd0"} Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.976889 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-79fd6c9864-wxp27" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.977085 4796 patch_prober.go:28] interesting pod/controller-manager-7bc8d6f54d-r5zzd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body= Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.977143 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7bc8d6f54d-r5zzd" podUID="8d02aeed-21ec-4659-80a0-b4e3b0e9bc25" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" Jan 27 06:50:13 crc kubenswrapper[4796]: I0127 06:50:13.999670 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7bc8d6f54d-r5zzd" podStartSLOduration=2.999637322 podStartE2EDuration="2.999637322s" podCreationTimestamp="2026-01-27 06:50:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:50:13.994269442 +0000 UTC m=+235.101236809" watchObservedRunningTime="2026-01-27 06:50:13.999637322 +0000 UTC m=+235.106604689" Jan 27 06:50:14 crc kubenswrapper[4796]: I0127 06:50:14.023698 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-79fd6c9864-wxp27" podStartSLOduration=3.023680499 podStartE2EDuration="3.023680499s" podCreationTimestamp="2026-01-27 06:50:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:50:14.021320067 +0000 UTC m=+235.128287404" watchObservedRunningTime="2026-01-27 06:50:14.023680499 +0000 UTC m=+235.130647826" Jan 27 06:50:14 crc kubenswrapper[4796]: I0127 06:50:14.769148 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="052d19d2-cd0c-4ddb-bc61-b5c40e0fba95" path="/var/lib/kubelet/pods/052d19d2-cd0c-4ddb-bc61-b5c40e0fba95/volumes" Jan 27 06:50:14 crc kubenswrapper[4796]: I0127 06:50:14.770265 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60" path="/var/lib/kubelet/pods/47e4f49a-5b29-48c9-9ca6-6b60bd1e1e60/volumes" Jan 27 06:50:14 crc kubenswrapper[4796]: I0127 06:50:14.773829 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-79fd6c9864-wxp27" Jan 27 06:50:14 crc kubenswrapper[4796]: I0127 06:50:14.989396 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7bc8d6f54d-r5zzd" Jan 27 06:50:16 crc kubenswrapper[4796]: I0127 06:50:16.396828 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vrm5z" Jan 27 06:50:16 crc kubenswrapper[4796]: I0127 06:50:16.448043 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vrm5z" Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.405254 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vrm5z"] Jan 27 06:50:17 crc 
kubenswrapper[4796]: I0127 06:50:17.666013 4796 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.667257 4796 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.667423 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.667933 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://2231382e49c9132f7858a6b3c2ca5d90f3667f27cbbd7f785f7dac10eba7a78f" gracePeriod=15 Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.668034 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5" gracePeriod=15 Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.667955 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1" gracePeriod=15 Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.668005 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb" gracePeriod=15 Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.667983 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b" gracePeriod=15 Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.668829 4796 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 06:50:17 crc kubenswrapper[4796]: E0127 06:50:17.669143 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.669172 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 06:50:17 crc kubenswrapper[4796]: E0127 06:50:17.669188 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.669199 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 06:50:17 crc kubenswrapper[4796]: E0127 06:50:17.669213 4796 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.669225 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 06:50:17 crc kubenswrapper[4796]: E0127 06:50:17.669239 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.669252 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 06:50:17 crc kubenswrapper[4796]: E0127 06:50:17.669267 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.669278 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 27 06:50:17 crc kubenswrapper[4796]: E0127 06:50:17.669299 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.669310 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 06:50:17 crc kubenswrapper[4796]: E0127 06:50:17.669325 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.669335 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 06:50:17 crc kubenswrapper[4796]: E0127 06:50:17.669348 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.669357 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.669519 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.669559 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.669575 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.669586 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.669603 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.669620 4796 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.669634 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.691180 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.691274 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.691321 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.691389 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.691462 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.691509 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.691564 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.691606 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.792343 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.792451 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.792480 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.792501 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.792520 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.792562 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.792615 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.792656 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.792661 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:50:17 crc 
kubenswrapper[4796]: I0127 06:50:17.792732 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.792764 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.792799 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.792936 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.793005 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.793063 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:50:17 crc kubenswrapper[4796]: I0127 06:50:17.793947 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:50:18 crc kubenswrapper[4796]: I0127 06:50:18.004162 4796 generic.go:334] "Generic (PLEG): container finished" podID="c6856fd8-b28c-489f-a672-e05777061280" containerID="708c66deac3d4231f36f894cb4c8673bc8111e4cd56f285324f7273e428672c7" exitCode=0 Jan 27 06:50:18 crc kubenswrapper[4796]: I0127 06:50:18.004227 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c6856fd8-b28c-489f-a672-e05777061280","Type":"ContainerDied","Data":"708c66deac3d4231f36f894cb4c8673bc8111e4cd56f285324f7273e428672c7"} Jan 27 06:50:18 crc kubenswrapper[4796]: I0127 06:50:18.005011 4796 status_manager.go:851] "Failed to get status for pod" podUID="c6856fd8-b28c-489f-a672-e05777061280" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:18 crc kubenswrapper[4796]: I0127 06:50:18.005298 4796 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:18 crc kubenswrapper[4796]: I0127 06:50:18.006497 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 27 06:50:18 crc kubenswrapper[4796]: I0127 06:50:18.007671 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 06:50:18 crc kubenswrapper[4796]: I0127 06:50:18.008148 4796 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2231382e49c9132f7858a6b3c2ca5d90f3667f27cbbd7f785f7dac10eba7a78f" exitCode=0 Jan 27 06:50:18 crc kubenswrapper[4796]: I0127 06:50:18.008167 4796 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb" exitCode=0 Jan 27 06:50:18 crc kubenswrapper[4796]: I0127 06:50:18.008174 4796 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1" exitCode=0 Jan 27 06:50:18 crc kubenswrapper[4796]: I0127 06:50:18.008182 4796 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b" exitCode=2 Jan 27 06:50:18 crc kubenswrapper[4796]: I0127 06:50:18.008326 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vrm5z" podUID="e6f0ca0a-ff9c-4420-92bb-517ca68b906c" containerName="registry-server" containerID="cri-o://2990b97a6732dc2613188226a8f4ae1ff95547205a8f2c3d192d7550552c0cba" gracePeriod=2 Jan 27 06:50:18 crc kubenswrapper[4796]: I0127 06:50:18.008602 4796 scope.go:117] "RemoveContainer" containerID="46491279d221c158344a02074864289a8073826d8d72f0592c7e240c036c562c" Jan 27 06:50:18 crc kubenswrapper[4796]: I0127 06:50:18.008878 4796 status_manager.go:851] "Failed to get status for pod" podUID="e6f0ca0a-ff9c-4420-92bb-517ca68b906c" pod="openshift-marketplace/redhat-operators-vrm5z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vrm5z\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:18 crc kubenswrapper[4796]: E0127 06:50:18.008899 4796 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.166:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-vrm5z.188e83cc83a40ad6 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-vrm5z,UID:e6f0ca0a-ff9c-4420-92bb-517ca68b906c,APIVersion:v1,ResourceVersion:28423,FieldPath:spec.containers{registry-server},},Reason:Killing,Message:Stopping container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 06:50:18.00831663 +0000 UTC m=+239.115283957,LastTimestamp:2026-01-27 06:50:18.00831663 +0000 UTC m=+239.115283957,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 06:50:18 crc kubenswrapper[4796]: I0127 06:50:18.009053 4796 status_manager.go:851] "Failed to get status for pod" podUID="c6856fd8-b28c-489f-a672-e05777061280" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:18 crc kubenswrapper[4796]: I0127 06:50:18.009318 4796 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:18 crc kubenswrapper[4796]: I0127 06:50:18.450854 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vrm5z" Jan 27 06:50:18 crc kubenswrapper[4796]: I0127 06:50:18.452283 4796 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:18 crc kubenswrapper[4796]: I0127 06:50:18.452843 4796 status_manager.go:851] "Failed to get status for pod" podUID="e6f0ca0a-ff9c-4420-92bb-517ca68b906c" pod="openshift-marketplace/redhat-operators-vrm5z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vrm5z\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:18 crc kubenswrapper[4796]: I0127 06:50:18.453401 4796 status_manager.go:851] "Failed to get status for pod" podUID="c6856fd8-b28c-489f-a672-e05777061280" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:18 crc kubenswrapper[4796]: I0127 06:50:18.602121 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6f0ca0a-ff9c-4420-92bb-517ca68b906c-catalog-content\") pod \"e6f0ca0a-ff9c-4420-92bb-517ca68b906c\" (UID: \"e6f0ca0a-ff9c-4420-92bb-517ca68b906c\") " Jan 27 06:50:18 crc kubenswrapper[4796]: I0127 06:50:18.602290 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6f0ca0a-ff9c-4420-92bb-517ca68b906c-utilities\") pod \"e6f0ca0a-ff9c-4420-92bb-517ca68b906c\" (UID: \"e6f0ca0a-ff9c-4420-92bb-517ca68b906c\") " Jan 27 06:50:18 crc kubenswrapper[4796]: I0127 06:50:18.602343 4796 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-xn6z8\" (UniqueName: \"kubernetes.io/projected/e6f0ca0a-ff9c-4420-92bb-517ca68b906c-kube-api-access-xn6z8\") pod \"e6f0ca0a-ff9c-4420-92bb-517ca68b906c\" (UID: \"e6f0ca0a-ff9c-4420-92bb-517ca68b906c\") " Jan 27 06:50:18 crc kubenswrapper[4796]: I0127 06:50:18.604005 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6f0ca0a-ff9c-4420-92bb-517ca68b906c-utilities" (OuterVolumeSpecName: "utilities") pod "e6f0ca0a-ff9c-4420-92bb-517ca68b906c" (UID: "e6f0ca0a-ff9c-4420-92bb-517ca68b906c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:50:18 crc kubenswrapper[4796]: I0127 06:50:18.610634 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6f0ca0a-ff9c-4420-92bb-517ca68b906c-kube-api-access-xn6z8" (OuterVolumeSpecName: "kube-api-access-xn6z8") pod "e6f0ca0a-ff9c-4420-92bb-517ca68b906c" (UID: "e6f0ca0a-ff9c-4420-92bb-517ca68b906c"). InnerVolumeSpecName "kube-api-access-xn6z8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:50:18 crc kubenswrapper[4796]: I0127 06:50:18.703505 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6f0ca0a-ff9c-4420-92bb-517ca68b906c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:18 crc kubenswrapper[4796]: I0127 06:50:18.703555 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xn6z8\" (UniqueName: \"kubernetes.io/projected/e6f0ca0a-ff9c-4420-92bb-517ca68b906c-kube-api-access-xn6z8\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:18 crc kubenswrapper[4796]: I0127 06:50:18.736258 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6f0ca0a-ff9c-4420-92bb-517ca68b906c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6f0ca0a-ff9c-4420-92bb-517ca68b906c" (UID: "e6f0ca0a-ff9c-4420-92bb-517ca68b906c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:50:18 crc kubenswrapper[4796]: I0127 06:50:18.804858 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6f0ca0a-ff9c-4420-92bb-517ca68b906c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:19 crc kubenswrapper[4796]: I0127 06:50:19.022185 4796 generic.go:334] "Generic (PLEG): container finished" podID="e6f0ca0a-ff9c-4420-92bb-517ca68b906c" containerID="2990b97a6732dc2613188226a8f4ae1ff95547205a8f2c3d192d7550552c0cba" exitCode=0 Jan 27 06:50:19 crc kubenswrapper[4796]: I0127 06:50:19.022248 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vrm5z" Jan 27 06:50:19 crc kubenswrapper[4796]: I0127 06:50:19.022292 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vrm5z" event={"ID":"e6f0ca0a-ff9c-4420-92bb-517ca68b906c","Type":"ContainerDied","Data":"2990b97a6732dc2613188226a8f4ae1ff95547205a8f2c3d192d7550552c0cba"} Jan 27 06:50:19 crc kubenswrapper[4796]: I0127 06:50:19.022370 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vrm5z" event={"ID":"e6f0ca0a-ff9c-4420-92bb-517ca68b906c","Type":"ContainerDied","Data":"2c189a000e3102a41ee1babce31bff4b4e0b9101fee81950c3584f75f55e29f3"} Jan 27 06:50:19 crc kubenswrapper[4796]: I0127 06:50:19.022401 4796 scope.go:117] "RemoveContainer" containerID="2990b97a6732dc2613188226a8f4ae1ff95547205a8f2c3d192d7550552c0cba" Jan 27 06:50:19 crc kubenswrapper[4796]: I0127 06:50:19.023308 4796 status_manager.go:851] "Failed to get status for pod" podUID="e6f0ca0a-ff9c-4420-92bb-517ca68b906c" pod="openshift-marketplace/redhat-operators-vrm5z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vrm5z\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:19 crc kubenswrapper[4796]: I0127 06:50:19.023762 4796 status_manager.go:851] "Failed to get status for pod" podUID="c6856fd8-b28c-489f-a672-e05777061280" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:19 crc kubenswrapper[4796]: I0127 06:50:19.028578 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 06:50:19 crc kubenswrapper[4796]: I0127 06:50:19.029376 4796 status_manager.go:851] "Failed to get status for pod" podUID="e6f0ca0a-ff9c-4420-92bb-517ca68b906c" pod="openshift-marketplace/redhat-operators-vrm5z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vrm5z\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:19 crc kubenswrapper[4796]: I0127 06:50:19.029760 4796 status_manager.go:851] "Failed to get status for pod" podUID="c6856fd8-b28c-489f-a672-e05777061280" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:19 crc kubenswrapper[4796]: I0127 06:50:19.040781 4796 scope.go:117] "RemoveContainer" containerID="51071f9341cf9295a858e668021d6f842a54aa900d37a468932a3303be850e89" Jan 27 06:50:19 crc kubenswrapper[4796]: I0127 06:50:19.062845 4796 scope.go:117] "RemoveContainer" containerID="8ccd29a9f5a03b136f02a9745478fd4d6bfa62e21a7fd7fa353c5ebeb0c710a4" Jan 27 06:50:19 crc kubenswrapper[4796]: I0127 06:50:19.096163 4796 scope.go:117] "RemoveContainer" containerID="2990b97a6732dc2613188226a8f4ae1ff95547205a8f2c3d192d7550552c0cba" Jan 27 06:50:19 crc kubenswrapper[4796]: E0127 06:50:19.096708 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2990b97a6732dc2613188226a8f4ae1ff95547205a8f2c3d192d7550552c0cba\": container with ID starting with 
2990b97a6732dc2613188226a8f4ae1ff95547205a8f2c3d192d7550552c0cba not found: ID does not exist" containerID="2990b97a6732dc2613188226a8f4ae1ff95547205a8f2c3d192d7550552c0cba" Jan 27 06:50:19 crc kubenswrapper[4796]: I0127 06:50:19.096761 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2990b97a6732dc2613188226a8f4ae1ff95547205a8f2c3d192d7550552c0cba"} err="failed to get container status \"2990b97a6732dc2613188226a8f4ae1ff95547205a8f2c3d192d7550552c0cba\": rpc error: code = NotFound desc = could not find container \"2990b97a6732dc2613188226a8f4ae1ff95547205a8f2c3d192d7550552c0cba\": container with ID starting with 2990b97a6732dc2613188226a8f4ae1ff95547205a8f2c3d192d7550552c0cba not found: ID does not exist" Jan 27 06:50:19 crc kubenswrapper[4796]: I0127 06:50:19.096795 4796 scope.go:117] "RemoveContainer" containerID="51071f9341cf9295a858e668021d6f842a54aa900d37a468932a3303be850e89" Jan 27 06:50:19 crc kubenswrapper[4796]: E0127 06:50:19.097339 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51071f9341cf9295a858e668021d6f842a54aa900d37a468932a3303be850e89\": container with ID starting with 51071f9341cf9295a858e668021d6f842a54aa900d37a468932a3303be850e89 not found: ID does not exist" containerID="51071f9341cf9295a858e668021d6f842a54aa900d37a468932a3303be850e89" Jan 27 06:50:19 crc kubenswrapper[4796]: I0127 06:50:19.097407 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51071f9341cf9295a858e668021d6f842a54aa900d37a468932a3303be850e89"} err="failed to get container status \"51071f9341cf9295a858e668021d6f842a54aa900d37a468932a3303be850e89\": rpc error: code = NotFound desc = could not find container \"51071f9341cf9295a858e668021d6f842a54aa900d37a468932a3303be850e89\": container with ID starting with 51071f9341cf9295a858e668021d6f842a54aa900d37a468932a3303be850e89 not found: ID does not exist" Jan 27 06:50:19 crc kubenswrapper[4796]: I0127 06:50:19.097456 4796 scope.go:117] "RemoveContainer" containerID="8ccd29a9f5a03b136f02a9745478fd4d6bfa62e21a7fd7fa353c5ebeb0c710a4" Jan 27 06:50:19 crc kubenswrapper[4796]: E0127 06:50:19.097910 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ccd29a9f5a03b136f02a9745478fd4d6bfa62e21a7fd7fa353c5ebeb0c710a4\": container with ID starting with 8ccd29a9f5a03b136f02a9745478fd4d6bfa62e21a7fd7fa353c5ebeb0c710a4 not found: ID does not exist" containerID="8ccd29a9f5a03b136f02a9745478fd4d6bfa62e21a7fd7fa353c5ebeb0c710a4" Jan 27 06:50:19 crc kubenswrapper[4796]: I0127 06:50:19.097970 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ccd29a9f5a03b136f02a9745478fd4d6bfa62e21a7fd7fa353c5ebeb0c710a4"} err="failed to get container status \"8ccd29a9f5a03b136f02a9745478fd4d6bfa62e21a7fd7fa353c5ebeb0c710a4\": rpc error: code = NotFound desc = could not find container \"8ccd29a9f5a03b136f02a9745478fd4d6bfa62e21a7fd7fa353c5ebeb0c710a4\": container with ID starting with 8ccd29a9f5a03b136f02a9745478fd4d6bfa62e21a7fd7fa353c5ebeb0c710a4 not found: ID does not exist" Jan 27 06:50:19 crc kubenswrapper[4796]: I0127 06:50:19.414127 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 06:50:19 crc kubenswrapper[4796]: I0127 06:50:19.415694 4796 status_manager.go:851] "Failed to get status for pod" podUID="e6f0ca0a-ff9c-4420-92bb-517ca68b906c" pod="openshift-marketplace/redhat-operators-vrm5z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vrm5z\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:19 crc kubenswrapper[4796]: I0127 06:50:19.416577 4796 status_manager.go:851] "Failed to get status for pod" podUID="c6856fd8-b28c-489f-a672-e05777061280" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:19 crc kubenswrapper[4796]: I0127 06:50:19.515520 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c6856fd8-b28c-489f-a672-e05777061280-var-lock\") pod \"c6856fd8-b28c-489f-a672-e05777061280\" (UID: \"c6856fd8-b28c-489f-a672-e05777061280\") " Jan 27 06:50:19 crc kubenswrapper[4796]: I0127 06:50:19.515648 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c6856fd8-b28c-489f-a672-e05777061280-kube-api-access\") pod \"c6856fd8-b28c-489f-a672-e05777061280\" (UID: \"c6856fd8-b28c-489f-a672-e05777061280\") " Jan 27 06:50:19 crc kubenswrapper[4796]: I0127 06:50:19.515755 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c6856fd8-b28c-489f-a672-e05777061280-kubelet-dir\") pod \"c6856fd8-b28c-489f-a672-e05777061280\" (UID: \"c6856fd8-b28c-489f-a672-e05777061280\") " Jan 27 06:50:19 crc kubenswrapper[4796]: I0127 06:50:19.515784 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6856fd8-b28c-489f-a672-e05777061280-var-lock" (OuterVolumeSpecName: "var-lock") pod "c6856fd8-b28c-489f-a672-e05777061280" (UID: "c6856fd8-b28c-489f-a672-e05777061280"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:50:19 crc kubenswrapper[4796]: I0127 06:50:19.515868 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6856fd8-b28c-489f-a672-e05777061280-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c6856fd8-b28c-489f-a672-e05777061280" (UID: "c6856fd8-b28c-489f-a672-e05777061280"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:50:19 crc kubenswrapper[4796]: I0127 06:50:19.516286 4796 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c6856fd8-b28c-489f-a672-e05777061280-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:19 crc kubenswrapper[4796]: I0127 06:50:19.516328 4796 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c6856fd8-b28c-489f-a672-e05777061280-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:19 crc kubenswrapper[4796]: I0127 06:50:19.521852 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6856fd8-b28c-489f-a672-e05777061280-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c6856fd8-b28c-489f-a672-e05777061280" (UID: "c6856fd8-b28c-489f-a672-e05777061280"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:50:19 crc kubenswrapper[4796]: I0127 06:50:19.617707 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c6856fd8-b28c-489f-a672-e05777061280-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:20 crc kubenswrapper[4796]: I0127 06:50:20.038351 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 06:50:20 crc kubenswrapper[4796]: I0127 06:50:20.039274 4796 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5" exitCode=0 Jan 27 06:50:20 crc kubenswrapper[4796]: I0127 06:50:20.041211 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c6856fd8-b28c-489f-a672-e05777061280","Type":"ContainerDied","Data":"85c359559b64b24d78a3bcfb96a84340b55079c537d4c9f3306687668b3b9c0e"} Jan 27 06:50:20 crc kubenswrapper[4796]: I0127 06:50:20.041259 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 27 06:50:20 crc kubenswrapper[4796]: I0127 06:50:20.041264 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85c359559b64b24d78a3bcfb96a84340b55079c537d4c9f3306687668b3b9c0e" Jan 27 06:50:20 crc kubenswrapper[4796]: I0127 06:50:20.078606 4796 status_manager.go:851] "Failed to get status for pod" podUID="c6856fd8-b28c-489f-a672-e05777061280" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:20 crc kubenswrapper[4796]: I0127 06:50:20.078908 4796 status_manager.go:851] "Failed to get status for pod" podUID="e6f0ca0a-ff9c-4420-92bb-517ca68b906c" pod="openshift-marketplace/redhat-operators-vrm5z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vrm5z\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:20 crc kubenswrapper[4796]: I0127 06:50:20.575374 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 27 06:50:20 crc kubenswrapper[4796]: I0127 06:50:20.576453 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:50:20 crc kubenswrapper[4796]: I0127 06:50:20.577269 4796 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:20 crc kubenswrapper[4796]: I0127 06:50:20.579068 4796 status_manager.go:851] "Failed to get status for pod" podUID="e6f0ca0a-ff9c-4420-92bb-517ca68b906c" pod="openshift-marketplace/redhat-operators-vrm5z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vrm5z\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:20 crc kubenswrapper[4796]: I0127 06:50:20.580706 4796 status_manager.go:851] "Failed to get status for pod" podUID="c6856fd8-b28c-489f-a672-e05777061280" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:20 crc kubenswrapper[4796]: I0127 06:50:20.732588 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 06:50:20 crc kubenswrapper[4796]: I0127 06:50:20.733451 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 06:50:20 crc kubenswrapper[4796]: I0127 06:50:20.733520 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 27 06:50:20 crc kubenswrapper[4796]: I0127 06:50:20.732697 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:50:20 crc kubenswrapper[4796]: I0127 06:50:20.733742 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:50:20 crc kubenswrapper[4796]: I0127 06:50:20.733783 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:50:20 crc kubenswrapper[4796]: I0127 06:50:20.749914 4796 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:20 crc kubenswrapper[4796]: I0127 06:50:20.750226 4796 status_manager.go:851] "Failed to get status for pod" podUID="e6f0ca0a-ff9c-4420-92bb-517ca68b906c" pod="openshift-marketplace/redhat-operators-vrm5z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vrm5z\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:20 crc kubenswrapper[4796]: I0127 06:50:20.750431 4796 status_manager.go:851] "Failed to get status for pod" podUID="c6856fd8-b28c-489f-a672-e05777061280" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:20 crc kubenswrapper[4796]: I0127 06:50:20.756162 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 27 06:50:20 crc kubenswrapper[4796]: I0127 06:50:20.835509 4796 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:20 crc kubenswrapper[4796]: I0127 06:50:20.835559 4796 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:20 crc kubenswrapper[4796]: I0127 06:50:20.835571 4796 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node 
\"crc\" DevicePath \"\"" Jan 27 06:50:21 crc kubenswrapper[4796]: I0127 06:50:21.053030 4796 scope.go:117] "RemoveContainer" containerID="2231382e49c9132f7858a6b3c2ca5d90f3667f27cbbd7f785f7dac10eba7a78f" Jan 27 06:50:21 crc kubenswrapper[4796]: I0127 06:50:21.053215 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:50:21 crc kubenswrapper[4796]: I0127 06:50:21.054526 4796 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:21 crc kubenswrapper[4796]: I0127 06:50:21.056392 4796 status_manager.go:851] "Failed to get status for pod" podUID="e6f0ca0a-ff9c-4420-92bb-517ca68b906c" pod="openshift-marketplace/redhat-operators-vrm5z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vrm5z\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:21 crc kubenswrapper[4796]: I0127 06:50:21.056822 4796 status_manager.go:851] "Failed to get status for pod" podUID="c6856fd8-b28c-489f-a672-e05777061280" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:21 crc kubenswrapper[4796]: I0127 06:50:21.057654 4796 status_manager.go:851] "Failed to get status for pod" podUID="c6856fd8-b28c-489f-a672-e05777061280" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:21 crc kubenswrapper[4796]: I0127 06:50:21.058322 4796 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:21 crc kubenswrapper[4796]: I0127 06:50:21.058824 4796 status_manager.go:851] "Failed to get status for pod" podUID="e6f0ca0a-ff9c-4420-92bb-517ca68b906c" pod="openshift-marketplace/redhat-operators-vrm5z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vrm5z\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:21 crc kubenswrapper[4796]: I0127 06:50:21.075406 4796 scope.go:117] "RemoveContainer" containerID="a3fb160d4be08e9e7483bb0f541d23f824096436847d17e0790dc83d47e494eb" Jan 27 06:50:21 crc kubenswrapper[4796]: I0127 06:50:21.096840 4796 scope.go:117] "RemoveContainer" containerID="771856c28e395647c30cec4e8a6b84a148d57192ff5d4b76152762ea2a9b97e1" Jan 27 06:50:21 crc kubenswrapper[4796]: I0127 06:50:21.116747 4796 scope.go:117] "RemoveContainer" containerID="42241f79a0d44c03c799ecddba09aeb450fa8013aa370d3d81f91ecb08c25d8b" Jan 27 06:50:21 crc kubenswrapper[4796]: I0127 06:50:21.138435 4796 scope.go:117] "RemoveContainer" containerID="a3c0b75840162e9c18c82db765a87b243d37e992db464c75aef2150984a436e5" Jan 27 06:50:21 crc kubenswrapper[4796]: I0127 06:50:21.172290 4796 scope.go:117] "RemoveContainer" 
containerID="50c8919a061c912d4f2bb9685cfc2d157a07d18e2a1eb0c2cd9faf06996a0994" Jan 27 06:50:22 crc kubenswrapper[4796]: E0127 06:50:22.704075 4796 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.166:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:50:22 crc kubenswrapper[4796]: I0127 06:50:22.704770 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:50:23 crc kubenswrapper[4796]: I0127 06:50:23.077193 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"a2812cfb9691a7c44bd0ef1c191d22392e02dcf55d2bcf52aa142681ac496861"} Jan 27 06:50:23 crc kubenswrapper[4796]: I0127 06:50:23.077718 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"04702af0e546188eb4a11a9d65c97b52788092e25118b71c06f73ed8f155b9f0"} Jan 27 06:50:23 crc kubenswrapper[4796]: I0127 06:50:23.078320 4796 status_manager.go:851] "Failed to get status for pod" podUID="c6856fd8-b28c-489f-a672-e05777061280" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:23 crc kubenswrapper[4796]: E0127 06:50:23.078351 4796 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.166:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:50:23 crc kubenswrapper[4796]: I0127 06:50:23.078703 4796 status_manager.go:851] "Failed to get status for pod" podUID="e6f0ca0a-ff9c-4420-92bb-517ca68b906c" pod="openshift-marketplace/redhat-operators-vrm5z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vrm5z\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:23 crc kubenswrapper[4796]: E0127 06:50:23.308993 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-a8f0c96fa9c29b4f30167f4d3e7c78ddd6ffe63dc6b3953421c6c27dfbec933c\": RecentStats: unable to find data in memory cache]" Jan 27 06:50:25 crc kubenswrapper[4796]: E0127 06:50:25.835437 4796 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.166:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-operators-vrm5z.188e83cc83a40ad6 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-vrm5z,UID:e6f0ca0a-ff9c-4420-92bb-517ca68b906c,APIVersion:v1,ResourceVersion:28423,FieldPath:spec.containers{registry-server},},Reason:Killing,Message:Stopping container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-27 06:50:18.00831663 +0000 UTC m=+239.115283957,LastTimestamp:2026-01-27 06:50:18.00831663 +0000 UTC m=+239.115283957,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 27 06:50:27 crc kubenswrapper[4796]: E0127 06:50:27.004392 4796 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:27 crc kubenswrapper[4796]: E0127 06:50:27.005281 4796 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:27 crc kubenswrapper[4796]: E0127 06:50:27.005728 4796 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:27 crc kubenswrapper[4796]: E0127 06:50:27.006197 4796 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:27 crc kubenswrapper[4796]: E0127 06:50:27.006579 4796 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:27 crc kubenswrapper[4796]: I0127 06:50:27.006623 4796 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 27 06:50:27 crc kubenswrapper[4796]: E0127 06:50:27.006969 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" interval="200ms" Jan 27 06:50:27 crc kubenswrapper[4796]: E0127 06:50:27.208024 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" interval="400ms" Jan 27 06:50:27 crc kubenswrapper[4796]: E0127 06:50:27.608616 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" interval="800ms" Jan 27 06:50:28 crc kubenswrapper[4796]: E0127 06:50:28.409714 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 
38.102.83.166:6443: connect: connection refused" interval="1.6s" Jan 27 06:50:29 crc kubenswrapper[4796]: I0127 06:50:29.746314 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:50:29 crc kubenswrapper[4796]: I0127 06:50:29.747776 4796 status_manager.go:851] "Failed to get status for pod" podUID="e6f0ca0a-ff9c-4420-92bb-517ca68b906c" pod="openshift-marketplace/redhat-operators-vrm5z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vrm5z\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:29 crc kubenswrapper[4796]: I0127 06:50:29.748285 4796 status_manager.go:851] "Failed to get status for pod" podUID="c6856fd8-b28c-489f-a672-e05777061280" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:29 crc kubenswrapper[4796]: I0127 06:50:29.775532 4796 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="41c97fb9-7f88-4adf-b9e5-a35ca143adad" Jan 27 06:50:29 crc kubenswrapper[4796]: I0127 06:50:29.775617 4796 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="41c97fb9-7f88-4adf-b9e5-a35ca143adad" Jan 27 06:50:29 crc kubenswrapper[4796]: E0127 06:50:29.776235 4796 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:50:29 crc kubenswrapper[4796]: I0127 06:50:29.776969 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:50:30 crc kubenswrapper[4796]: E0127 06:50:30.011785 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.166:6443: connect: connection refused" interval="3.2s" Jan 27 06:50:30 crc kubenswrapper[4796]: I0127 06:50:30.124977 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b60ccaa4f4c01f482873c692337ebafccf27fb3d526127e1c90a6999021d85ed"} Jan 27 06:50:30 crc kubenswrapper[4796]: I0127 06:50:30.125041 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2631832e67007849a434e14546a3eed11002a2478af3efca25d20b27c9a2386f"} Jan 27 06:50:30 crc kubenswrapper[4796]: I0127 06:50:30.125331 4796 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="41c97fb9-7f88-4adf-b9e5-a35ca143adad" Jan 27 06:50:30 crc kubenswrapper[4796]: I0127 06:50:30.125359 4796 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="41c97fb9-7f88-4adf-b9e5-a35ca143adad" Jan 27 06:50:30 crc kubenswrapper[4796]: E0127 06:50:30.126444 4796 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:50:30 crc kubenswrapper[4796]: I0127 06:50:30.127375 4796 status_manager.go:851] "Failed to get status for pod" podUID="e6f0ca0a-ff9c-4420-92bb-517ca68b906c" pod="openshift-marketplace/redhat-operators-vrm5z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vrm5z\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:30 crc kubenswrapper[4796]: I0127 06:50:30.127946 4796 status_manager.go:851] "Failed to get status for pod" podUID="c6856fd8-b28c-489f-a672-e05777061280" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:30 crc kubenswrapper[4796]: I0127 06:50:30.756276 4796 status_manager.go:851] "Failed to get status for pod" podUID="e6f0ca0a-ff9c-4420-92bb-517ca68b906c" pod="openshift-marketplace/redhat-operators-vrm5z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vrm5z\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:30 crc kubenswrapper[4796]: I0127 06:50:30.757005 4796 status_manager.go:851] "Failed to get status for pod" podUID="c6856fd8-b28c-489f-a672-e05777061280" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:30 crc kubenswrapper[4796]: I0127 06:50:30.757451 4796 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:30 crc kubenswrapper[4796]: E0127 06:50:30.773137 4796 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.166:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" volumeName="registry-storage" Jan 27 06:50:31 crc kubenswrapper[4796]: I0127 06:50:31.135174 4796 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="b60ccaa4f4c01f482873c692337ebafccf27fb3d526127e1c90a6999021d85ed" exitCode=0 Jan 27 06:50:31 crc kubenswrapper[4796]: I0127 06:50:31.135342 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"b60ccaa4f4c01f482873c692337ebafccf27fb3d526127e1c90a6999021d85ed"} Jan 27 06:50:31 crc kubenswrapper[4796]: I0127 06:50:31.135785 4796 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="41c97fb9-7f88-4adf-b9e5-a35ca143adad" Jan 27 06:50:31 crc kubenswrapper[4796]: I0127 06:50:31.135826 4796 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="41c97fb9-7f88-4adf-b9e5-a35ca143adad" Jan 27 06:50:31 crc kubenswrapper[4796]: E0127 06:50:31.137346 4796 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:50:31 crc kubenswrapper[4796]: I0127 06:50:31.137716 4796 status_manager.go:851] "Failed to get status for pod" podUID="e6f0ca0a-ff9c-4420-92bb-517ca68b906c" pod="openshift-marketplace/redhat-operators-vrm5z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vrm5z\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:31 crc kubenswrapper[4796]: I0127 06:50:31.138180 4796 status_manager.go:851] "Failed to get status for pod" podUID="c6856fd8-b28c-489f-a672-e05777061280" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:31 crc kubenswrapper[4796]: I0127 06:50:31.138518 4796 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:31 crc kubenswrapper[4796]: I0127 06:50:31.143378 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 06:50:31 crc kubenswrapper[4796]: I0127 
06:50:31.143459 4796 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="48948084ba0e80ec0b29617d1cec75a0d5e625e2343ee2d2cb3485bfbc526e3b" exitCode=1 Jan 27 06:50:31 crc kubenswrapper[4796]: I0127 06:50:31.143504 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"48948084ba0e80ec0b29617d1cec75a0d5e625e2343ee2d2cb3485bfbc526e3b"} Jan 27 06:50:31 crc kubenswrapper[4796]: I0127 06:50:31.144172 4796 scope.go:117] "RemoveContainer" containerID="48948084ba0e80ec0b29617d1cec75a0d5e625e2343ee2d2cb3485bfbc526e3b" Jan 27 06:50:31 crc kubenswrapper[4796]: I0127 06:50:31.144694 4796 status_manager.go:851] "Failed to get status for pod" podUID="c6856fd8-b28c-489f-a672-e05777061280" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:31 crc kubenswrapper[4796]: I0127 06:50:31.145165 4796 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:31 crc kubenswrapper[4796]: I0127 06:50:31.145848 4796 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:31 crc kubenswrapper[4796]: I0127 06:50:31.146407 4796 status_manager.go:851] "Failed to get status for pod" podUID="e6f0ca0a-ff9c-4420-92bb-517ca68b906c" pod="openshift-marketplace/redhat-operators-vrm5z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vrm5z\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:32 crc kubenswrapper[4796]: I0127 06:50:32.158974 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 27 06:50:32 crc kubenswrapper[4796]: I0127 06:50:32.159518 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f814132ddc8b99cab8c4139c992491671114347062aa62a55f441877a55a054c"} Jan 27 06:50:32 crc kubenswrapper[4796]: I0127 06:50:32.161247 4796 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:32 crc kubenswrapper[4796]: I0127 06:50:32.161771 4796 status_manager.go:851] "Failed to get status for pod" podUID="e6f0ca0a-ff9c-4420-92bb-517ca68b906c" pod="openshift-marketplace/redhat-operators-vrm5z" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-vrm5z\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:32 crc kubenswrapper[4796]: I0127 06:50:32.162293 4796 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:32 crc kubenswrapper[4796]: I0127 06:50:32.162894 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"41b466e64405dcbf9f2010fd8e1415ede2988cf974b7ddfc7fe25de7f8be9220"} Jan 27 06:50:32 crc kubenswrapper[4796]: I0127 06:50:32.162842 4796 status_manager.go:851] "Failed to get status for pod" podUID="c6856fd8-b28c-489f-a672-e05777061280" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.166:6443: connect: connection refused" Jan 27 06:50:32 crc kubenswrapper[4796]: I0127 06:50:32.511568 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" podUID="56d7f37b-05cc-4a36-b844-423465e79e8e" containerName="oauth-openshift" containerID="cri-o://c0a65fa7ca58d9f5303380ad48d67655e46225184fa4a2817a216c13ae6a35ac" gracePeriod=15 Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.064414 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.173692 4796 generic.go:334] "Generic (PLEG): container finished" podID="56d7f37b-05cc-4a36-b844-423465e79e8e" containerID="c0a65fa7ca58d9f5303380ad48d67655e46225184fa4a2817a216c13ae6a35ac" exitCode=0 Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.173759 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.173754 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" event={"ID":"56d7f37b-05cc-4a36-b844-423465e79e8e","Type":"ContainerDied","Data":"c0a65fa7ca58d9f5303380ad48d67655e46225184fa4a2817a216c13ae6a35ac"} Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.173966 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-bxmhd" event={"ID":"56d7f37b-05cc-4a36-b844-423465e79e8e","Type":"ContainerDied","Data":"2bb5a8eef1d0564aad5a180044597f0d9cee7b3fc08c00fe7a8fb7259f634e14"} Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.174015 4796 scope.go:117] "RemoveContainer" containerID="c0a65fa7ca58d9f5303380ad48d67655e46225184fa4a2817a216c13ae6a35ac" Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.178701 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"dd0131ec2f4ae376a4fe89dd4b4d8a0a0d7bebfd1153632d9e61adf17717226b"} Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.178733 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"41a3d2c62723d9992fa135f5a523c6baf1fa2d3c3f40f0246c43d6d789865cde"} Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.178746 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4b39aa867f4308a14217c8133e429777ef744482aed9732ecd26c8ab076ff589"} Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.201966 4796 scope.go:117] "RemoveContainer" containerID="c0a65fa7ca58d9f5303380ad48d67655e46225184fa4a2817a216c13ae6a35ac" Jan 27 06:50:33 crc kubenswrapper[4796]: E0127 06:50:33.202376 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0a65fa7ca58d9f5303380ad48d67655e46225184fa4a2817a216c13ae6a35ac\": container with ID starting with c0a65fa7ca58d9f5303380ad48d67655e46225184fa4a2817a216c13ae6a35ac not found: ID does not exist" containerID="c0a65fa7ca58d9f5303380ad48d67655e46225184fa4a2817a216c13ae6a35ac" Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.202416 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0a65fa7ca58d9f5303380ad48d67655e46225184fa4a2817a216c13ae6a35ac"} err="failed to get container status \"c0a65fa7ca58d9f5303380ad48d67655e46225184fa4a2817a216c13ae6a35ac\": rpc error: code = NotFound desc = could not find container \"c0a65fa7ca58d9f5303380ad48d67655e46225184fa4a2817a216c13ae6a35ac\": container with ID starting with c0a65fa7ca58d9f5303380ad48d67655e46225184fa4a2817a216c13ae6a35ac not found: ID does not exist" Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.209867 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghgz4\" (UniqueName: \"kubernetes.io/projected/56d7f37b-05cc-4a36-b844-423465e79e8e-kube-api-access-ghgz4\") pod \"56d7f37b-05cc-4a36-b844-423465e79e8e\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.209932 4796 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-user-template-error\") pod \"56d7f37b-05cc-4a36-b844-423465e79e8e\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.209959 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/56d7f37b-05cc-4a36-b844-423465e79e8e-audit-dir\") pod \"56d7f37b-05cc-4a36-b844-423465e79e8e\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.209978 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-user-template-login\") pod \"56d7f37b-05cc-4a36-b844-423465e79e8e\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.209996 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-session\") pod \"56d7f37b-05cc-4a36-b844-423465e79e8e\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.210018 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-cliconfig\") pod \"56d7f37b-05cc-4a36-b844-423465e79e8e\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.210041 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/56d7f37b-05cc-4a36-b844-423465e79e8e-audit-policies\") pod \"56d7f37b-05cc-4a36-b844-423465e79e8e\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.210085 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-user-idp-0-file-data\") pod \"56d7f37b-05cc-4a36-b844-423465e79e8e\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.210105 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-serving-cert\") pod \"56d7f37b-05cc-4a36-b844-423465e79e8e\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.210133 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-ocp-branding-template\") pod \"56d7f37b-05cc-4a36-b844-423465e79e8e\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.210155 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-trusted-ca-bundle\") pod \"56d7f37b-05cc-4a36-b844-423465e79e8e\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.210174 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-user-template-provider-selection\") pod \"56d7f37b-05cc-4a36-b844-423465e79e8e\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.210193 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-router-certs\") pod \"56d7f37b-05cc-4a36-b844-423465e79e8e\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.210213 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-service-ca\") pod \"56d7f37b-05cc-4a36-b844-423465e79e8e\" (UID: \"56d7f37b-05cc-4a36-b844-423465e79e8e\") " Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.210204 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56d7f37b-05cc-4a36-b844-423465e79e8e-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "56d7f37b-05cc-4a36-b844-423465e79e8e" (UID: "56d7f37b-05cc-4a36-b844-423465e79e8e"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.210423 4796 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/56d7f37b-05cc-4a36-b844-423465e79e8e-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.211198 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56d7f37b-05cc-4a36-b844-423465e79e8e-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "56d7f37b-05cc-4a36-b844-423465e79e8e" (UID: "56d7f37b-05cc-4a36-b844-423465e79e8e"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.211245 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "56d7f37b-05cc-4a36-b844-423465e79e8e" (UID: "56d7f37b-05cc-4a36-b844-423465e79e8e"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.212527 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "56d7f37b-05cc-4a36-b844-423465e79e8e" (UID: "56d7f37b-05cc-4a36-b844-423465e79e8e"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.212525 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "56d7f37b-05cc-4a36-b844-423465e79e8e" (UID: "56d7f37b-05cc-4a36-b844-423465e79e8e"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.217453 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "56d7f37b-05cc-4a36-b844-423465e79e8e" (UID: "56d7f37b-05cc-4a36-b844-423465e79e8e"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.217599 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56d7f37b-05cc-4a36-b844-423465e79e8e-kube-api-access-ghgz4" (OuterVolumeSpecName: "kube-api-access-ghgz4") pod "56d7f37b-05cc-4a36-b844-423465e79e8e" (UID: "56d7f37b-05cc-4a36-b844-423465e79e8e"). InnerVolumeSpecName "kube-api-access-ghgz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.217608 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "56d7f37b-05cc-4a36-b844-423465e79e8e" (UID: "56d7f37b-05cc-4a36-b844-423465e79e8e"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.217866 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "56d7f37b-05cc-4a36-b844-423465e79e8e" (UID: "56d7f37b-05cc-4a36-b844-423465e79e8e"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.218245 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "56d7f37b-05cc-4a36-b844-423465e79e8e" (UID: "56d7f37b-05cc-4a36-b844-423465e79e8e"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.218652 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "56d7f37b-05cc-4a36-b844-423465e79e8e" (UID: "56d7f37b-05cc-4a36-b844-423465e79e8e"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.218726 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "56d7f37b-05cc-4a36-b844-423465e79e8e" (UID: "56d7f37b-05cc-4a36-b844-423465e79e8e"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.219033 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "56d7f37b-05cc-4a36-b844-423465e79e8e" (UID: "56d7f37b-05cc-4a36-b844-423465e79e8e"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.219224 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "56d7f37b-05cc-4a36-b844-423465e79e8e" (UID: "56d7f37b-05cc-4a36-b844-423465e79e8e"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.311965 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.312036 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.312063 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.312083 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.312103 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.312122 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghgz4\" (UniqueName: \"kubernetes.io/projected/56d7f37b-05cc-4a36-b844-423465e79e8e-kube-api-access-ghgz4\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.312141 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" 
(UniqueName: \"kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.312160 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.312178 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.312195 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.312213 4796 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/56d7f37b-05cc-4a36-b844-423465e79e8e-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.312235 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.312255 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/56d7f37b-05cc-4a36-b844-423465e79e8e-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:50:33 crc kubenswrapper[4796]: E0127 06:50:33.469144 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-a8f0c96fa9c29b4f30167f4d3e7c78ddd6ffe63dc6b3953421c6c27dfbec933c\": RecentStats: unable to find data in memory cache]" Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.585029 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:50:33 crc kubenswrapper[4796]: I0127 06:50:33.589090 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:50:34 crc kubenswrapper[4796]: I0127 06:50:34.214368 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9e67dabd366cb9fc6bb5ac7c952780ef3bab93670313fc560adcf8191d915359"} Jan 27 06:50:34 crc kubenswrapper[4796]: I0127 06:50:34.214593 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:50:34 crc kubenswrapper[4796]: I0127 06:50:34.214620 4796 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="41c97fb9-7f88-4adf-b9e5-a35ca143adad" Jan 27 06:50:34 crc kubenswrapper[4796]: I0127 06:50:34.214640 4796 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="41c97fb9-7f88-4adf-b9e5-a35ca143adad" Jan 27 06:50:34 crc kubenswrapper[4796]: I0127 06:50:34.217441 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:50:34 crc kubenswrapper[4796]: I0127 06:50:34.777647 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:50:34 crc kubenswrapper[4796]: I0127 06:50:34.778336 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:50:34 crc kubenswrapper[4796]: I0127 06:50:34.782452 4796 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 27 06:50:34 crc kubenswrapper[4796]: [+]log ok Jan 27 06:50:34 crc kubenswrapper[4796]: [+]etcd ok Jan 27 06:50:34 crc kubenswrapper[4796]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 27 06:50:34 crc kubenswrapper[4796]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 27 06:50:34 crc kubenswrapper[4796]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 27 06:50:34 crc kubenswrapper[4796]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 27 06:50:34 crc kubenswrapper[4796]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 27 06:50:34 crc kubenswrapper[4796]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 27 06:50:34 crc kubenswrapper[4796]: [+]poststarthook/generic-apiserver-start-informers ok Jan 27 06:50:34 crc kubenswrapper[4796]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 27 06:50:34 crc kubenswrapper[4796]: [+]poststarthook/priority-and-fairness-filter ok Jan 27 06:50:34 crc kubenswrapper[4796]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 27 06:50:34 crc kubenswrapper[4796]: [+]poststarthook/start-apiextensions-informers ok Jan 27 06:50:34 crc kubenswrapper[4796]: [+]poststarthook/start-apiextensions-controllers ok Jan 27 06:50:34 crc kubenswrapper[4796]: [+]poststarthook/crd-informer-synced ok Jan 27 06:50:34 crc kubenswrapper[4796]: [+]poststarthook/start-system-namespaces-controller ok Jan 27 06:50:34 crc kubenswrapper[4796]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 27 06:50:34 crc kubenswrapper[4796]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 27 06:50:34 crc kubenswrapper[4796]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 27 06:50:34 crc kubenswrapper[4796]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 27 06:50:34 crc kubenswrapper[4796]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 27 06:50:34 crc kubenswrapper[4796]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 27 06:50:34 crc kubenswrapper[4796]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Jan 27 06:50:34 crc kubenswrapper[4796]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 27 06:50:34 crc kubenswrapper[4796]: [+]poststarthook/bootstrap-controller ok Jan 27 06:50:34 crc kubenswrapper[4796]: 
[+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 27 06:50:34 crc kubenswrapper[4796]: [+]poststarthook/start-kube-aggregator-informers ok Jan 27 06:50:34 crc kubenswrapper[4796]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 27 06:50:34 crc kubenswrapper[4796]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 27 06:50:34 crc kubenswrapper[4796]: [+]poststarthook/apiservice-registration-controller ok Jan 27 06:50:34 crc kubenswrapper[4796]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 27 06:50:34 crc kubenswrapper[4796]: [+]poststarthook/apiservice-discovery-controller ok Jan 27 06:50:34 crc kubenswrapper[4796]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 27 06:50:34 crc kubenswrapper[4796]: [+]autoregister-completion ok Jan 27 06:50:34 crc kubenswrapper[4796]: [+]poststarthook/apiservice-openapi-controller ok Jan 27 06:50:34 crc kubenswrapper[4796]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 27 06:50:34 crc kubenswrapper[4796]: livez check failed Jan 27 06:50:34 crc kubenswrapper[4796]: I0127 06:50:34.782568 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 27 06:50:39 crc kubenswrapper[4796]: I0127 06:50:39.229979 4796 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:50:39 crc kubenswrapper[4796]: I0127 06:50:39.785433 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:50:39 crc kubenswrapper[4796]: I0127 06:50:39.789721 4796 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="2fb2d2d4-049b-4a8a-866c-993fb10605a1" Jan 27 06:50:40 crc kubenswrapper[4796]: I0127 06:50:40.255105 4796 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="41c97fb9-7f88-4adf-b9e5-a35ca143adad" Jan 27 06:50:40 crc kubenswrapper[4796]: I0127 06:50:40.255141 4796 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="41c97fb9-7f88-4adf-b9e5-a35ca143adad" Jan 27 06:50:40 crc kubenswrapper[4796]: I0127 06:50:40.260793 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:50:40 crc kubenswrapper[4796]: I0127 06:50:40.762007 4796 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="2fb2d2d4-049b-4a8a-866c-993fb10605a1" Jan 27 06:50:41 crc kubenswrapper[4796]: I0127 06:50:41.261466 4796 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="41c97fb9-7f88-4adf-b9e5-a35ca143adad" Jan 27 06:50:41 crc kubenswrapper[4796]: I0127 06:50:41.261516 4796 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="41c97fb9-7f88-4adf-b9e5-a35ca143adad" Jan 27 06:50:41 crc kubenswrapper[4796]: I0127 06:50:41.265207 4796 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" 
oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="2fb2d2d4-049b-4a8a-866c-993fb10605a1" Jan 27 06:50:42 crc kubenswrapper[4796]: I0127 06:50:42.267701 4796 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="41c97fb9-7f88-4adf-b9e5-a35ca143adad" Jan 27 06:50:42 crc kubenswrapper[4796]: I0127 06:50:42.268139 4796 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="41c97fb9-7f88-4adf-b9e5-a35ca143adad" Jan 27 06:50:42 crc kubenswrapper[4796]: I0127 06:50:42.270092 4796 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="2fb2d2d4-049b-4a8a-866c-993fb10605a1" Jan 27 06:50:43 crc kubenswrapper[4796]: E0127 06:50:43.655743 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-a8f0c96fa9c29b4f30167f4d3e7c78ddd6ffe63dc6b3953421c6c27dfbec933c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice\": RecentStats: unable to find data in memory cache]" Jan 27 06:50:47 crc kubenswrapper[4796]: I0127 06:50:47.838163 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 27 06:50:48 crc kubenswrapper[4796]: I0127 06:50:48.280964 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 06:50:48 crc kubenswrapper[4796]: I0127 06:50:48.905910 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 27 06:50:49 crc kubenswrapper[4796]: I0127 06:50:49.218630 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 27 06:50:49 crc kubenswrapper[4796]: I0127 06:50:49.474991 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 27 06:50:49 crc kubenswrapper[4796]: I0127 06:50:49.560264 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 27 06:50:49 crc kubenswrapper[4796]: I0127 06:50:49.744371 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 27 06:50:50 crc kubenswrapper[4796]: I0127 06:50:50.056782 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 27 06:50:50 crc kubenswrapper[4796]: I0127 06:50:50.057529 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 06:50:50 crc kubenswrapper[4796]: I0127 06:50:50.198079 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 27 06:50:50 crc kubenswrapper[4796]: I0127 06:50:50.230081 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 27 06:50:50 crc kubenswrapper[4796]: I0127 06:50:50.311457 4796 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 06:50:50 crc kubenswrapper[4796]: I0127 06:50:50.399960 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 27 06:50:50 crc kubenswrapper[4796]: I0127 06:50:50.536301 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 27 06:50:50 crc kubenswrapper[4796]: I0127 06:50:50.685179 4796 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 27 06:50:50 crc kubenswrapper[4796]: I0127 06:50:50.731984 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 27 06:50:50 crc kubenswrapper[4796]: I0127 06:50:50.961015 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 27 06:50:50 crc kubenswrapper[4796]: I0127 06:50:50.983049 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 27 06:50:51 crc kubenswrapper[4796]: I0127 06:50:51.012274 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 27 06:50:51 crc kubenswrapper[4796]: I0127 06:50:51.135734 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 27 06:50:51 crc kubenswrapper[4796]: I0127 06:50:51.173835 4796 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 27 06:50:51 crc kubenswrapper[4796]: I0127 06:50:51.182782 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-bxmhd","openshift-marketplace/redhat-operators-vrm5z","openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 06:50:51 crc kubenswrapper[4796]: I0127 06:50:51.182885 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 27 06:50:51 crc kubenswrapper[4796]: I0127 06:50:51.189355 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 27 06:50:51 crc kubenswrapper[4796]: I0127 06:50:51.218031 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=12.21799978 podStartE2EDuration="12.21799978s" podCreationTimestamp="2026-01-27 06:50:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:50:51.208958114 +0000 UTC m=+272.315925491" watchObservedRunningTime="2026-01-27 06:50:51.21799978 +0000 UTC m=+272.324967147" Jan 27 06:50:51 crc kubenswrapper[4796]: I0127 06:50:51.230888 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 27 06:50:51 crc kubenswrapper[4796]: I0127 06:50:51.317145 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 06:50:51 crc kubenswrapper[4796]: I0127 06:50:51.344780 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 27 06:50:51 crc kubenswrapper[4796]: I0127 06:50:51.364360 4796 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 27 06:50:51 crc kubenswrapper[4796]: I0127 06:50:51.484606 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 27 06:50:51 crc kubenswrapper[4796]: I0127 06:50:51.495571 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 27 06:50:51 crc kubenswrapper[4796]: I0127 06:50:51.545883 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 27 06:50:51 crc kubenswrapper[4796]: I0127 06:50:51.569649 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 27 06:50:51 crc kubenswrapper[4796]: I0127 06:50:51.621463 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 27 06:50:51 crc kubenswrapper[4796]: I0127 06:50:51.622823 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 27 06:50:51 crc kubenswrapper[4796]: I0127 06:50:51.646528 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 27 06:50:51 crc kubenswrapper[4796]: I0127 06:50:51.665206 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 27 06:50:51 crc kubenswrapper[4796]: I0127 06:50:51.736109 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 27 06:50:51 crc kubenswrapper[4796]: I0127 06:50:51.742832 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 27 06:50:51 crc kubenswrapper[4796]: I0127 06:50:51.786232 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 27 06:50:51 crc kubenswrapper[4796]: I0127 06:50:51.815776 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 27 06:50:51 crc kubenswrapper[4796]: I0127 06:50:51.869622 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 27 06:50:51 crc kubenswrapper[4796]: I0127 06:50:51.992874 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 27 06:50:52 crc kubenswrapper[4796]: I0127 06:50:52.201330 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 27 06:50:52 crc kubenswrapper[4796]: I0127 06:50:52.276168 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 27 06:50:52 crc kubenswrapper[4796]: I0127 06:50:52.341523 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 27 06:50:52 crc kubenswrapper[4796]: I0127 06:50:52.346356 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 27 06:50:52 crc kubenswrapper[4796]: I0127 06:50:52.582174 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 
27 06:50:52 crc kubenswrapper[4796]: I0127 06:50:52.592133 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 27 06:50:52 crc kubenswrapper[4796]: I0127 06:50:52.663107 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 27 06:50:52 crc kubenswrapper[4796]: I0127 06:50:52.678344 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 27 06:50:52 crc kubenswrapper[4796]: I0127 06:50:52.762804 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56d7f37b-05cc-4a36-b844-423465e79e8e" path="/var/lib/kubelet/pods/56d7f37b-05cc-4a36-b844-423465e79e8e/volumes" Jan 27 06:50:52 crc kubenswrapper[4796]: I0127 06:50:52.764042 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6f0ca0a-ff9c-4420-92bb-517ca68b906c" path="/var/lib/kubelet/pods/e6f0ca0a-ff9c-4420-92bb-517ca68b906c/volumes" Jan 27 06:50:52 crc kubenswrapper[4796]: I0127 06:50:52.810123 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 27 06:50:52 crc kubenswrapper[4796]: I0127 06:50:52.825211 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 27 06:50:52 crc kubenswrapper[4796]: I0127 06:50:52.825219 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 27 06:50:52 crc kubenswrapper[4796]: I0127 06:50:52.833258 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 06:50:52 crc kubenswrapper[4796]: I0127 06:50:52.958708 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 27 06:50:53 crc kubenswrapper[4796]: I0127 06:50:53.033616 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 27 06:50:53 crc kubenswrapper[4796]: I0127 06:50:53.039182 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 27 06:50:53 crc kubenswrapper[4796]: I0127 06:50:53.076754 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 27 06:50:53 crc kubenswrapper[4796]: I0127 06:50:53.097504 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 27 06:50:53 crc kubenswrapper[4796]: I0127 06:50:53.105004 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 27 06:50:53 crc kubenswrapper[4796]: I0127 06:50:53.113379 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 27 06:50:53 crc kubenswrapper[4796]: I0127 06:50:53.255927 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 27 06:50:53 crc kubenswrapper[4796]: I0127 06:50:53.265418 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 27 06:50:53 crc kubenswrapper[4796]: I0127 06:50:53.482001 4796 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 06:50:53 crc kubenswrapper[4796]: I0127 06:50:53.556004 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 27 06:50:53 crc kubenswrapper[4796]: I0127 06:50:53.556524 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 27 06:50:53 crc kubenswrapper[4796]: I0127 06:50:53.562829 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 27 06:50:53 crc kubenswrapper[4796]: I0127 06:50:53.651747 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 06:50:53 crc kubenswrapper[4796]: I0127 06:50:53.704061 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 27 06:50:53 crc kubenswrapper[4796]: E0127 06:50:53.793043 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-a8f0c96fa9c29b4f30167f4d3e7c78ddd6ffe63dc6b3953421c6c27dfbec933c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice\": RecentStats: unable to find data in memory cache]" Jan 27 06:50:53 crc kubenswrapper[4796]: I0127 06:50:53.912949 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 27 06:50:53 crc kubenswrapper[4796]: I0127 06:50:53.967761 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 27 06:50:54 crc kubenswrapper[4796]: I0127 06:50:54.010056 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 06:50:54 crc kubenswrapper[4796]: I0127 06:50:54.028794 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 27 06:50:54 crc kubenswrapper[4796]: I0127 06:50:54.196930 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 27 06:50:54 crc kubenswrapper[4796]: I0127 06:50:54.243171 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 27 06:50:54 crc kubenswrapper[4796]: I0127 06:50:54.296818 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 27 06:50:54 crc kubenswrapper[4796]: I0127 06:50:54.388028 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 27 06:50:54 crc kubenswrapper[4796]: I0127 06:50:54.459194 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 27 06:50:54 crc kubenswrapper[4796]: I0127 06:50:54.504063 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 27 06:50:54 crc kubenswrapper[4796]: 
I0127 06:50:54.528279 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 27 06:50:54 crc kubenswrapper[4796]: I0127 06:50:54.567337 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 27 06:50:54 crc kubenswrapper[4796]: I0127 06:50:54.580366 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 27 06:50:54 crc kubenswrapper[4796]: I0127 06:50:54.878026 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 27 06:50:54 crc kubenswrapper[4796]: I0127 06:50:54.957395 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 27 06:50:54 crc kubenswrapper[4796]: I0127 06:50:54.965247 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 27 06:50:55 crc kubenswrapper[4796]: I0127 06:50:55.102886 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 27 06:50:55 crc kubenswrapper[4796]: I0127 06:50:55.144888 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 27 06:50:55 crc kubenswrapper[4796]: I0127 06:50:55.199602 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 27 06:50:55 crc kubenswrapper[4796]: I0127 06:50:55.228613 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 27 06:50:55 crc kubenswrapper[4796]: I0127 06:50:55.301602 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 27 06:50:55 crc kubenswrapper[4796]: I0127 06:50:55.392303 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 27 06:50:55 crc kubenswrapper[4796]: I0127 06:50:55.474706 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 27 06:50:55 crc kubenswrapper[4796]: I0127 06:50:55.561947 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 27 06:50:55 crc kubenswrapper[4796]: I0127 06:50:55.562564 4796 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 27 06:50:55 crc kubenswrapper[4796]: I0127 06:50:55.579597 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 27 06:50:55 crc kubenswrapper[4796]: I0127 06:50:55.583440 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 27 06:50:55 crc kubenswrapper[4796]: I0127 06:50:55.610518 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 27 06:50:55 crc kubenswrapper[4796]: I0127 06:50:55.626180 4796 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"console-operator-config" Jan 27 06:50:55 crc kubenswrapper[4796]: I0127 06:50:55.668979 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 27 06:50:55 crc kubenswrapper[4796]: I0127 06:50:55.696081 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 27 06:50:55 crc kubenswrapper[4796]: I0127 06:50:55.697687 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 06:50:55 crc kubenswrapper[4796]: I0127 06:50:55.772135 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 27 06:50:55 crc kubenswrapper[4796]: I0127 06:50:55.777392 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 27 06:50:55 crc kubenswrapper[4796]: I0127 06:50:55.798867 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 27 06:50:55 crc kubenswrapper[4796]: I0127 06:50:55.918772 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 27 06:50:55 crc kubenswrapper[4796]: I0127 06:50:55.923384 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 27 06:50:56 crc kubenswrapper[4796]: I0127 06:50:56.045612 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 27 06:50:56 crc kubenswrapper[4796]: I0127 06:50:56.046596 4796 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 27 06:50:56 crc kubenswrapper[4796]: I0127 06:50:56.062310 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 27 06:50:56 crc kubenswrapper[4796]: I0127 06:50:56.072471 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 27 06:50:56 crc kubenswrapper[4796]: I0127 06:50:56.105067 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 27 06:50:56 crc kubenswrapper[4796]: I0127 06:50:56.122816 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 06:50:56 crc kubenswrapper[4796]: I0127 06:50:56.145796 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 27 06:50:56 crc kubenswrapper[4796]: I0127 06:50:56.363419 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 27 06:50:56 crc kubenswrapper[4796]: I0127 06:50:56.370672 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 27 06:50:56 crc kubenswrapper[4796]: I0127 06:50:56.376810 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 27 06:50:56 crc kubenswrapper[4796]: I0127 
06:50:56.387669 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 27 06:50:56 crc kubenswrapper[4796]: I0127 06:50:56.411034 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 27 06:50:56 crc kubenswrapper[4796]: I0127 06:50:56.582638 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 27 06:50:56 crc kubenswrapper[4796]: I0127 06:50:56.676690 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 27 06:50:56 crc kubenswrapper[4796]: I0127 06:50:56.733757 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 27 06:50:56 crc kubenswrapper[4796]: I0127 06:50:56.744107 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 27 06:50:56 crc kubenswrapper[4796]: I0127 06:50:56.746065 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 27 06:50:56 crc kubenswrapper[4796]: I0127 06:50:56.777836 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 27 06:50:56 crc kubenswrapper[4796]: I0127 06:50:56.794718 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 27 06:50:56 crc kubenswrapper[4796]: I0127 06:50:56.831497 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 27 06:50:56 crc kubenswrapper[4796]: I0127 06:50:56.877977 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 27 06:50:56 crc kubenswrapper[4796]: I0127 06:50:56.994061 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6d584df96b-lrx6g"] Jan 27 06:50:56 crc kubenswrapper[4796]: I0127 06:50:56.994068 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 27 06:50:56 crc kubenswrapper[4796]: E0127 06:50:56.994339 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6f0ca0a-ff9c-4420-92bb-517ca68b906c" containerName="extract-content" Jan 27 06:50:56 crc kubenswrapper[4796]: I0127 06:50:56.994363 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6f0ca0a-ff9c-4420-92bb-517ca68b906c" containerName="extract-content" Jan 27 06:50:56 crc kubenswrapper[4796]: E0127 06:50:56.994387 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56d7f37b-05cc-4a36-b844-423465e79e8e" containerName="oauth-openshift" Jan 27 06:50:56 crc kubenswrapper[4796]: I0127 06:50:56.994400 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d7f37b-05cc-4a36-b844-423465e79e8e" containerName="oauth-openshift" Jan 27 06:50:56 crc kubenswrapper[4796]: E0127 06:50:56.994420 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6f0ca0a-ff9c-4420-92bb-517ca68b906c" containerName="extract-utilities" Jan 27 06:50:56 crc kubenswrapper[4796]: I0127 06:50:56.994431 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6f0ca0a-ff9c-4420-92bb-517ca68b906c" containerName="extract-utilities" Jan 
27 06:50:56 crc kubenswrapper[4796]: E0127 06:50:56.994448 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6856fd8-b28c-489f-a672-e05777061280" containerName="installer" Jan 27 06:50:56 crc kubenswrapper[4796]: I0127 06:50:56.994459 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6856fd8-b28c-489f-a672-e05777061280" containerName="installer" Jan 27 06:50:56 crc kubenswrapper[4796]: E0127 06:50:56.994478 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6f0ca0a-ff9c-4420-92bb-517ca68b906c" containerName="registry-server" Jan 27 06:50:56 crc kubenswrapper[4796]: I0127 06:50:56.994488 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6f0ca0a-ff9c-4420-92bb-517ca68b906c" containerName="registry-server" Jan 27 06:50:56 crc kubenswrapper[4796]: I0127 06:50:56.994672 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="56d7f37b-05cc-4a36-b844-423465e79e8e" containerName="oauth-openshift" Jan 27 06:50:56 crc kubenswrapper[4796]: I0127 06:50:56.994696 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6f0ca0a-ff9c-4420-92bb-517ca68b906c" containerName="registry-server" Jan 27 06:50:56 crc kubenswrapper[4796]: I0127 06:50:56.994720 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6856fd8-b28c-489f-a672-e05777061280" containerName="installer" Jan 27 06:50:56 crc kubenswrapper[4796]: I0127 06:50:56.995252 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:56 crc kubenswrapper[4796]: I0127 06:50:56.997531 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 27 06:50:56 crc kubenswrapper[4796]: I0127 06:50:56.997693 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 27 06:50:56 crc kubenswrapper[4796]: I0127 06:50:56.998371 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 27 06:50:56 crc kubenswrapper[4796]: I0127 06:50:56.998686 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 27 06:50:56 crc kubenswrapper[4796]: I0127 06:50:56.998940 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.000038 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.000237 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.000514 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.000517 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.000965 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.001190 4796 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.001521 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.004808 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.013092 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6d584df96b-lrx6g"] Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.015654 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.016188 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.017888 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.033508 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.050619 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.057782 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.085951 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/029ac711-07ba-491f-a8e6-3be408b06a36-v4-0-config-system-service-ca\") pod \"oauth-openshift-6d584df96b-lrx6g\" (UID: \"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.086017 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/029ac711-07ba-491f-a8e6-3be408b06a36-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6d584df96b-lrx6g\" (UID: \"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.086042 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/029ac711-07ba-491f-a8e6-3be408b06a36-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6d584df96b-lrx6g\" (UID: \"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.086060 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/029ac711-07ba-491f-a8e6-3be408b06a36-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-6d584df96b-lrx6g\" (UID: \"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.086078 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/029ac711-07ba-491f-a8e6-3be408b06a36-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6d584df96b-lrx6g\" (UID: \"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.086097 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/029ac711-07ba-491f-a8e6-3be408b06a36-v4-0-config-system-router-certs\") pod \"oauth-openshift-6d584df96b-lrx6g\" (UID: \"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.086111 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/029ac711-07ba-491f-a8e6-3be408b06a36-v4-0-config-user-template-login\") pod \"oauth-openshift-6d584df96b-lrx6g\" (UID: \"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.086129 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/029ac711-07ba-491f-a8e6-3be408b06a36-v4-0-config-system-session\") pod \"oauth-openshift-6d584df96b-lrx6g\" (UID: \"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.086149 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/029ac711-07ba-491f-a8e6-3be408b06a36-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6d584df96b-lrx6g\" (UID: \"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.086171 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/029ac711-07ba-491f-a8e6-3be408b06a36-v4-0-config-user-template-error\") pod \"oauth-openshift-6d584df96b-lrx6g\" (UID: \"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.086188 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/029ac711-07ba-491f-a8e6-3be408b06a36-audit-policies\") pod \"oauth-openshift-6d584df96b-lrx6g\" (UID: \"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.086206 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-njbdc\" (UniqueName: \"kubernetes.io/projected/029ac711-07ba-491f-a8e6-3be408b06a36-kube-api-access-njbdc\") pod \"oauth-openshift-6d584df96b-lrx6g\" (UID: \"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.086223 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/029ac711-07ba-491f-a8e6-3be408b06a36-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6d584df96b-lrx6g\" (UID: \"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.086243 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/029ac711-07ba-491f-a8e6-3be408b06a36-audit-dir\") pod \"oauth-openshift-6d584df96b-lrx6g\" (UID: \"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.152827 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.164466 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.187192 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/029ac711-07ba-491f-a8e6-3be408b06a36-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6d584df96b-lrx6g\" (UID: \"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.187231 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njbdc\" (UniqueName: \"kubernetes.io/projected/029ac711-07ba-491f-a8e6-3be408b06a36-kube-api-access-njbdc\") pod \"oauth-openshift-6d584df96b-lrx6g\" (UID: \"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.187258 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/029ac711-07ba-491f-a8e6-3be408b06a36-audit-dir\") pod \"oauth-openshift-6d584df96b-lrx6g\" (UID: \"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.187297 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/029ac711-07ba-491f-a8e6-3be408b06a36-v4-0-config-system-service-ca\") pod \"oauth-openshift-6d584df96b-lrx6g\" (UID: \"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.187333 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/029ac711-07ba-491f-a8e6-3be408b06a36-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6d584df96b-lrx6g\" (UID: \"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.187353 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/029ac711-07ba-491f-a8e6-3be408b06a36-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6d584df96b-lrx6g\" (UID: \"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.187370 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/029ac711-07ba-491f-a8e6-3be408b06a36-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6d584df96b-lrx6g\" (UID: \"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.187389 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/029ac711-07ba-491f-a8e6-3be408b06a36-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6d584df96b-lrx6g\" (UID: \"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.187407 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/029ac711-07ba-491f-a8e6-3be408b06a36-v4-0-config-system-router-certs\") pod \"oauth-openshift-6d584df96b-lrx6g\" (UID: \"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.187423 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/029ac711-07ba-491f-a8e6-3be408b06a36-v4-0-config-user-template-login\") pod \"oauth-openshift-6d584df96b-lrx6g\" (UID: \"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.187442 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/029ac711-07ba-491f-a8e6-3be408b06a36-v4-0-config-system-session\") pod \"oauth-openshift-6d584df96b-lrx6g\" (UID: \"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.187459 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/029ac711-07ba-491f-a8e6-3be408b06a36-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6d584df96b-lrx6g\" (UID: \"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.187482 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" 
(UniqueName: \"kubernetes.io/secret/029ac711-07ba-491f-a8e6-3be408b06a36-v4-0-config-user-template-error\") pod \"oauth-openshift-6d584df96b-lrx6g\" (UID: \"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.187499 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/029ac711-07ba-491f-a8e6-3be408b06a36-audit-policies\") pod \"oauth-openshift-6d584df96b-lrx6g\" (UID: \"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.188353 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/029ac711-07ba-491f-a8e6-3be408b06a36-audit-dir\") pod \"oauth-openshift-6d584df96b-lrx6g\" (UID: \"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.188599 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/029ac711-07ba-491f-a8e6-3be408b06a36-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6d584df96b-lrx6g\" (UID: \"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.189109 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/029ac711-07ba-491f-a8e6-3be408b06a36-audit-policies\") pod \"oauth-openshift-6d584df96b-lrx6g\" (UID: \"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.189529 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/029ac711-07ba-491f-a8e6-3be408b06a36-v4-0-config-system-service-ca\") pod \"oauth-openshift-6d584df96b-lrx6g\" (UID: \"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.190006 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/029ac711-07ba-491f-a8e6-3be408b06a36-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6d584df96b-lrx6g\" (UID: \"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.194637 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/029ac711-07ba-491f-a8e6-3be408b06a36-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6d584df96b-lrx6g\" (UID: \"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.194721 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/029ac711-07ba-491f-a8e6-3be408b06a36-v4-0-config-system-session\") pod \"oauth-openshift-6d584df96b-lrx6g\" (UID: 
\"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.194757 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/029ac711-07ba-491f-a8e6-3be408b06a36-v4-0-config-system-router-certs\") pod \"oauth-openshift-6d584df96b-lrx6g\" (UID: \"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.194959 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/029ac711-07ba-491f-a8e6-3be408b06a36-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6d584df96b-lrx6g\" (UID: \"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.196301 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/029ac711-07ba-491f-a8e6-3be408b06a36-v4-0-config-user-template-error\") pod \"oauth-openshift-6d584df96b-lrx6g\" (UID: \"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.196524 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/029ac711-07ba-491f-a8e6-3be408b06a36-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6d584df96b-lrx6g\" (UID: \"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.196965 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/029ac711-07ba-491f-a8e6-3be408b06a36-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6d584df96b-lrx6g\" (UID: \"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.197398 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/029ac711-07ba-491f-a8e6-3be408b06a36-v4-0-config-user-template-login\") pod \"oauth-openshift-6d584df96b-lrx6g\" (UID: \"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.215426 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njbdc\" (UniqueName: \"kubernetes.io/projected/029ac711-07ba-491f-a8e6-3be408b06a36-kube-api-access-njbdc\") pod \"oauth-openshift-6d584df96b-lrx6g\" (UID: \"029ac711-07ba-491f-a8e6-3be408b06a36\") " pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.241312 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.322920 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.545724 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.593829 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.646389 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.662496 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.665145 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.748325 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.782369 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.792246 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.821687 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6d584df96b-lrx6g"] Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.861648 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.911554 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.913531 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.925279 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 27 06:50:57 crc kubenswrapper[4796]: I0127 06:50:57.978380 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 27 06:50:58 crc kubenswrapper[4796]: I0127 06:50:58.080483 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 27 06:50:58 crc kubenswrapper[4796]: I0127 06:50:58.114979 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 27 06:50:58 crc kubenswrapper[4796]: I0127 06:50:58.116570 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 27 06:50:58 crc kubenswrapper[4796]: I0127 06:50:58.121166 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 27 06:50:58 crc kubenswrapper[4796]: I0127 06:50:58.128144 4796 reflector.go:368] Caches populated for *v1.RuntimeClass from 
k8s.io/client-go/informers/factory.go:160 Jan 27 06:50:58 crc kubenswrapper[4796]: I0127 06:50:58.156900 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 27 06:50:58 crc kubenswrapper[4796]: I0127 06:50:58.174486 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 27 06:50:58 crc kubenswrapper[4796]: I0127 06:50:58.265866 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 27 06:50:58 crc kubenswrapper[4796]: I0127 06:50:58.346225 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 27 06:50:58 crc kubenswrapper[4796]: I0127 06:50:58.364477 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" event={"ID":"029ac711-07ba-491f-a8e6-3be408b06a36","Type":"ContainerStarted","Data":"f160fdb11c4e47a33fcaba48c84fffa040111334151c0f27db5c0ae976b6d9b5"} Jan 27 06:50:58 crc kubenswrapper[4796]: I0127 06:50:58.364550 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" event={"ID":"029ac711-07ba-491f-a8e6-3be408b06a36","Type":"ContainerStarted","Data":"a270f7dc9db606d3f93888ba444925331961918fb1977f43d08a597f9ba8995e"} Jan 27 06:50:58 crc kubenswrapper[4796]: I0127 06:50:58.365761 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:58 crc kubenswrapper[4796]: I0127 06:50:58.398088 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" podStartSLOduration=51.398063137 podStartE2EDuration="51.398063137s" podCreationTimestamp="2026-01-27 06:50:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:50:58.395357644 +0000 UTC m=+279.502324991" watchObservedRunningTime="2026-01-27 06:50:58.398063137 +0000 UTC m=+279.505030494" Jan 27 06:50:58 crc kubenswrapper[4796]: I0127 06:50:58.468073 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6d584df96b-lrx6g" Jan 27 06:50:58 crc kubenswrapper[4796]: I0127 06:50:58.566628 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 27 06:50:58 crc kubenswrapper[4796]: I0127 06:50:58.574786 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 27 06:50:58 crc kubenswrapper[4796]: I0127 06:50:58.620512 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 27 06:50:58 crc kubenswrapper[4796]: I0127 06:50:58.635213 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 27 06:50:58 crc kubenswrapper[4796]: I0127 06:50:58.649763 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 27 06:50:58 crc kubenswrapper[4796]: I0127 06:50:58.729730 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 27 
06:50:58 crc kubenswrapper[4796]: I0127 06:50:58.736823 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 27 06:50:58 crc kubenswrapper[4796]: I0127 06:50:58.826139 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 27 06:50:58 crc kubenswrapper[4796]: I0127 06:50:58.829275 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 27 06:50:58 crc kubenswrapper[4796]: I0127 06:50:58.855823 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 27 06:50:58 crc kubenswrapper[4796]: I0127 06:50:58.879032 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 27 06:50:58 crc kubenswrapper[4796]: I0127 06:50:58.902002 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 27 06:50:58 crc kubenswrapper[4796]: I0127 06:50:58.906018 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 27 06:50:58 crc kubenswrapper[4796]: I0127 06:50:58.943523 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 27 06:50:58 crc kubenswrapper[4796]: I0127 06:50:58.977171 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 06:50:59 crc kubenswrapper[4796]: I0127 06:50:59.018084 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 06:50:59 crc kubenswrapper[4796]: I0127 06:50:59.067757 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 27 06:50:59 crc kubenswrapper[4796]: I0127 06:50:59.141208 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 27 06:50:59 crc kubenswrapper[4796]: I0127 06:50:59.170868 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 27 06:50:59 crc kubenswrapper[4796]: I0127 06:50:59.217142 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 27 06:50:59 crc kubenswrapper[4796]: I0127 06:50:59.224867 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 27 06:50:59 crc kubenswrapper[4796]: I0127 06:50:59.373057 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 27 06:50:59 crc kubenswrapper[4796]: I0127 06:50:59.468346 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 27 06:50:59 crc kubenswrapper[4796]: I0127 06:50:59.485291 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 27 06:50:59 crc kubenswrapper[4796]: I0127 06:50:59.580887 4796 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 27 06:50:59 crc kubenswrapper[4796]: I0127 06:50:59.606492 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 27 06:50:59 crc kubenswrapper[4796]: I0127 06:50:59.617290 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 27 06:50:59 crc kubenswrapper[4796]: I0127 06:50:59.782925 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 27 06:50:59 crc kubenswrapper[4796]: I0127 06:50:59.849268 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 27 06:50:59 crc kubenswrapper[4796]: I0127 06:50:59.997650 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 27 06:51:00 crc kubenswrapper[4796]: I0127 06:51:00.336895 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 27 06:51:00 crc kubenswrapper[4796]: I0127 06:51:00.339759 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 27 06:51:00 crc kubenswrapper[4796]: I0127 06:51:00.373398 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 27 06:51:00 crc kubenswrapper[4796]: I0127 06:51:00.452071 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 06:51:00 crc kubenswrapper[4796]: I0127 06:51:00.480710 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 27 06:51:00 crc kubenswrapper[4796]: I0127 06:51:00.541021 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 27 06:51:00 crc kubenswrapper[4796]: I0127 06:51:00.784044 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 27 06:51:00 crc kubenswrapper[4796]: I0127 06:51:00.798579 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 27 06:51:00 crc kubenswrapper[4796]: I0127 06:51:00.852955 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 06:51:00 crc kubenswrapper[4796]: I0127 06:51:00.903091 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 27 06:51:00 crc kubenswrapper[4796]: I0127 06:51:00.942632 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 27 06:51:01 crc kubenswrapper[4796]: I0127 06:51:01.148362 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 27 06:51:01 crc kubenswrapper[4796]: I0127 06:51:01.231973 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 27 06:51:01 crc kubenswrapper[4796]: I0127 06:51:01.277513 4796 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 27 06:51:01 crc kubenswrapper[4796]: I0127 06:51:01.308462 4796 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 27 06:51:01 crc kubenswrapper[4796]: I0127 06:51:01.308734 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://a2812cfb9691a7c44bd0ef1c191d22392e02dcf55d2bcf52aa142681ac496861" gracePeriod=5 Jan 27 06:51:01 crc kubenswrapper[4796]: I0127 06:51:01.462527 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 27 06:51:01 crc kubenswrapper[4796]: I0127 06:51:01.592520 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 27 06:51:01 crc kubenswrapper[4796]: I0127 06:51:01.601902 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 27 06:51:01 crc kubenswrapper[4796]: I0127 06:51:01.614885 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 27 06:51:01 crc kubenswrapper[4796]: I0127 06:51:01.633528 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 27 06:51:01 crc kubenswrapper[4796]: I0127 06:51:01.655063 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 27 06:51:01 crc kubenswrapper[4796]: I0127 06:51:01.688081 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 27 06:51:01 crc kubenswrapper[4796]: I0127 06:51:01.690782 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 27 06:51:01 crc kubenswrapper[4796]: I0127 06:51:01.804008 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 27 06:51:01 crc kubenswrapper[4796]: I0127 06:51:01.883277 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 27 06:51:02 crc kubenswrapper[4796]: I0127 06:51:02.005741 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 06:51:02 crc kubenswrapper[4796]: I0127 06:51:02.066642 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 27 06:51:02 crc kubenswrapper[4796]: I0127 06:51:02.076335 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 27 06:51:02 crc kubenswrapper[4796]: I0127 06:51:02.105987 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 27 06:51:02 crc kubenswrapper[4796]: I0127 06:51:02.116486 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 27 06:51:02 crc kubenswrapper[4796]: I0127 06:51:02.180393 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 27 
06:51:02 crc kubenswrapper[4796]: I0127 06:51:02.249737 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 27 06:51:02 crc kubenswrapper[4796]: I0127 06:51:02.326036 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 27 06:51:02 crc kubenswrapper[4796]: I0127 06:51:02.376442 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 27 06:51:02 crc kubenswrapper[4796]: I0127 06:51:02.481758 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 27 06:51:02 crc kubenswrapper[4796]: I0127 06:51:02.606379 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 27 06:51:02 crc kubenswrapper[4796]: I0127 06:51:02.674958 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 27 06:51:02 crc kubenswrapper[4796]: I0127 06:51:02.861721 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 27 06:51:02 crc kubenswrapper[4796]: I0127 06:51:02.871441 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 27 06:51:03 crc kubenswrapper[4796]: I0127 06:51:03.063074 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 27 06:51:03 crc kubenswrapper[4796]: I0127 06:51:03.303029 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 27 06:51:03 crc kubenswrapper[4796]: I0127 06:51:03.580433 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 27 06:51:03 crc kubenswrapper[4796]: I0127 06:51:03.658179 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 27 06:51:03 crc kubenswrapper[4796]: I0127 06:51:03.859090 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 27 06:51:03 crc kubenswrapper[4796]: E0127 06:51:03.908074 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-a8f0c96fa9c29b4f30167f4d3e7c78ddd6ffe63dc6b3953421c6c27dfbec933c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice\": RecentStats: unable to find data in memory cache]" Jan 27 06:51:03 crc kubenswrapper[4796]: I0127 06:51:03.947965 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 27 06:51:04 crc kubenswrapper[4796]: I0127 06:51:04.067458 4796 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 27 06:51:04 crc kubenswrapper[4796]: I0127 06:51:04.235047 4796 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 27 06:51:04 crc kubenswrapper[4796]: I0127 06:51:04.406836 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 27 06:51:04 crc kubenswrapper[4796]: I0127 06:51:04.588684 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 27 06:51:04 crc kubenswrapper[4796]: I0127 06:51:04.615890 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 27 06:51:04 crc kubenswrapper[4796]: I0127 06:51:04.730652 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 27 06:51:05 crc kubenswrapper[4796]: I0127 06:51:05.875171 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 27 06:51:06 crc kubenswrapper[4796]: I0127 06:51:06.166674 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 27 06:51:06 crc kubenswrapper[4796]: I0127 06:51:06.425283 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 06:51:06 crc kubenswrapper[4796]: I0127 06:51:06.425646 4796 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="a2812cfb9691a7c44bd0ef1c191d22392e02dcf55d2bcf52aa142681ac496861" exitCode=137 Jan 27 06:51:06 crc kubenswrapper[4796]: I0127 06:51:06.566494 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 27 06:51:06 crc kubenswrapper[4796]: I0127 06:51:06.892635 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 06:51:06 crc kubenswrapper[4796]: I0127 06:51:06.892931 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:51:07 crc kubenswrapper[4796]: I0127 06:51:07.022834 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 06:51:07 crc kubenswrapper[4796]: I0127 06:51:07.023288 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 06:51:07 crc kubenswrapper[4796]: I0127 06:51:07.023539 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 06:51:07 crc kubenswrapper[4796]: I0127 06:51:07.023745 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 06:51:07 crc kubenswrapper[4796]: I0127 06:51:07.023920 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 27 06:51:07 crc kubenswrapper[4796]: I0127 06:51:07.023111 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:51:07 crc kubenswrapper[4796]: I0127 06:51:07.023398 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:51:07 crc kubenswrapper[4796]: I0127 06:51:07.023834 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:51:07 crc kubenswrapper[4796]: I0127 06:51:07.024083 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:51:07 crc kubenswrapper[4796]: I0127 06:51:07.025226 4796 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:07 crc kubenswrapper[4796]: I0127 06:51:07.025271 4796 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:07 crc kubenswrapper[4796]: I0127 06:51:07.025290 4796 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:07 crc kubenswrapper[4796]: I0127 06:51:07.025308 4796 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:07 crc kubenswrapper[4796]: I0127 06:51:07.037086 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:51:07 crc kubenswrapper[4796]: I0127 06:51:07.127063 4796 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:07 crc kubenswrapper[4796]: I0127 06:51:07.201128 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 27 06:51:07 crc kubenswrapper[4796]: I0127 06:51:07.435778 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 27 06:51:07 crc kubenswrapper[4796]: I0127 06:51:07.435890 4796 scope.go:117] "RemoveContainer" containerID="a2812cfb9691a7c44bd0ef1c191d22392e02dcf55d2bcf52aa142681ac496861" Jan 27 06:51:07 crc kubenswrapper[4796]: I0127 06:51:07.435991 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 27 06:51:08 crc kubenswrapper[4796]: I0127 06:51:08.758910 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 27 06:51:11 crc kubenswrapper[4796]: I0127 06:51:11.817697 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7bc8d6f54d-r5zzd"] Jan 27 06:51:11 crc kubenswrapper[4796]: I0127 06:51:11.819796 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7bc8d6f54d-r5zzd" podUID="8d02aeed-21ec-4659-80a0-b4e3b0e9bc25" containerName="controller-manager" containerID="cri-o://b7fd09ca1539bc9b43bc21b6d07e2093ec43a25d7947c944eb5da30147e1fa17" gracePeriod=30 Jan 27 06:51:11 crc kubenswrapper[4796]: I0127 06:51:11.912597 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79fd6c9864-wxp27"] Jan 27 06:51:11 crc kubenswrapper[4796]: I0127 06:51:11.912868 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-79fd6c9864-wxp27" podUID="4ac53e96-00de-4722-97eb-7137ef4431da" containerName="route-controller-manager" containerID="cri-o://b418cd56c17d6f8e65197144e14a7d5f620e832f41492f583ad1bc0438e7352e" gracePeriod=30 Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.240106 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bc8d6f54d-r5zzd" Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.286469 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79fd6c9864-wxp27" Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.343276 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d02aeed-21ec-4659-80a0-b4e3b0e9bc25-client-ca\") pod \"8d02aeed-21ec-4659-80a0-b4e3b0e9bc25\" (UID: \"8d02aeed-21ec-4659-80a0-b4e3b0e9bc25\") " Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.343432 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8d02aeed-21ec-4659-80a0-b4e3b0e9bc25-proxy-ca-bundles\") pod \"8d02aeed-21ec-4659-80a0-b4e3b0e9bc25\" (UID: \"8d02aeed-21ec-4659-80a0-b4e3b0e9bc25\") " Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.343463 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d02aeed-21ec-4659-80a0-b4e3b0e9bc25-serving-cert\") pod \"8d02aeed-21ec-4659-80a0-b4e3b0e9bc25\" (UID: \"8d02aeed-21ec-4659-80a0-b4e3b0e9bc25\") " Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.343518 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m29jg\" (UniqueName: \"kubernetes.io/projected/8d02aeed-21ec-4659-80a0-b4e3b0e9bc25-kube-api-access-m29jg\") pod \"8d02aeed-21ec-4659-80a0-b4e3b0e9bc25\" (UID: \"8d02aeed-21ec-4659-80a0-b4e3b0e9bc25\") " Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.343593 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d02aeed-21ec-4659-80a0-b4e3b0e9bc25-config\") pod \"8d02aeed-21ec-4659-80a0-b4e3b0e9bc25\" (UID: \"8d02aeed-21ec-4659-80a0-b4e3b0e9bc25\") " Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.344416 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d02aeed-21ec-4659-80a0-b4e3b0e9bc25-client-ca" (OuterVolumeSpecName: "client-ca") pod "8d02aeed-21ec-4659-80a0-b4e3b0e9bc25" (UID: "8d02aeed-21ec-4659-80a0-b4e3b0e9bc25"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.344425 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d02aeed-21ec-4659-80a0-b4e3b0e9bc25-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8d02aeed-21ec-4659-80a0-b4e3b0e9bc25" (UID: "8d02aeed-21ec-4659-80a0-b4e3b0e9bc25"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.344621 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d02aeed-21ec-4659-80a0-b4e3b0e9bc25-config" (OuterVolumeSpecName: "config") pod "8d02aeed-21ec-4659-80a0-b4e3b0e9bc25" (UID: "8d02aeed-21ec-4659-80a0-b4e3b0e9bc25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.349629 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d02aeed-21ec-4659-80a0-b4e3b0e9bc25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8d02aeed-21ec-4659-80a0-b4e3b0e9bc25" (UID: "8d02aeed-21ec-4659-80a0-b4e3b0e9bc25"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.349893 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d02aeed-21ec-4659-80a0-b4e3b0e9bc25-kube-api-access-m29jg" (OuterVolumeSpecName: "kube-api-access-m29jg") pod "8d02aeed-21ec-4659-80a0-b4e3b0e9bc25" (UID: "8d02aeed-21ec-4659-80a0-b4e3b0e9bc25"). InnerVolumeSpecName "kube-api-access-m29jg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.444980 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9lzr\" (UniqueName: \"kubernetes.io/projected/4ac53e96-00de-4722-97eb-7137ef4431da-kube-api-access-k9lzr\") pod \"4ac53e96-00de-4722-97eb-7137ef4431da\" (UID: \"4ac53e96-00de-4722-97eb-7137ef4431da\") " Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.445040 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ac53e96-00de-4722-97eb-7137ef4431da-config\") pod \"4ac53e96-00de-4722-97eb-7137ef4431da\" (UID: \"4ac53e96-00de-4722-97eb-7137ef4431da\") " Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.445070 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ac53e96-00de-4722-97eb-7137ef4431da-client-ca\") pod \"4ac53e96-00de-4722-97eb-7137ef4431da\" (UID: \"4ac53e96-00de-4722-97eb-7137ef4431da\") " Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.445110 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ac53e96-00de-4722-97eb-7137ef4431da-serving-cert\") pod \"4ac53e96-00de-4722-97eb-7137ef4431da\" (UID: \"4ac53e96-00de-4722-97eb-7137ef4431da\") " Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.445302 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d02aeed-21ec-4659-80a0-b4e3b0e9bc25-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.445316 4796 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8d02aeed-21ec-4659-80a0-b4e3b0e9bc25-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.445328 4796 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8d02aeed-21ec-4659-80a0-b4e3b0e9bc25-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.445342 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d02aeed-21ec-4659-80a0-b4e3b0e9bc25-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.445353 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m29jg\" (UniqueName: \"kubernetes.io/projected/8d02aeed-21ec-4659-80a0-b4e3b0e9bc25-kube-api-access-m29jg\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.446072 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ac53e96-00de-4722-97eb-7137ef4431da-client-ca" (OuterVolumeSpecName: "client-ca") pod "4ac53e96-00de-4722-97eb-7137ef4431da" (UID: 
"4ac53e96-00de-4722-97eb-7137ef4431da"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.446197 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ac53e96-00de-4722-97eb-7137ef4431da-config" (OuterVolumeSpecName: "config") pod "4ac53e96-00de-4722-97eb-7137ef4431da" (UID: "4ac53e96-00de-4722-97eb-7137ef4431da"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.448444 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ac53e96-00de-4722-97eb-7137ef4431da-kube-api-access-k9lzr" (OuterVolumeSpecName: "kube-api-access-k9lzr") pod "4ac53e96-00de-4722-97eb-7137ef4431da" (UID: "4ac53e96-00de-4722-97eb-7137ef4431da"). InnerVolumeSpecName "kube-api-access-k9lzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.449719 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ac53e96-00de-4722-97eb-7137ef4431da-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4ac53e96-00de-4722-97eb-7137ef4431da" (UID: "4ac53e96-00de-4722-97eb-7137ef4431da"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.476784 4796 generic.go:334] "Generic (PLEG): container finished" podID="8d02aeed-21ec-4659-80a0-b4e3b0e9bc25" containerID="b7fd09ca1539bc9b43bc21b6d07e2093ec43a25d7947c944eb5da30147e1fa17" exitCode=0 Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.476947 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bc8d6f54d-r5zzd" event={"ID":"8d02aeed-21ec-4659-80a0-b4e3b0e9bc25","Type":"ContainerDied","Data":"b7fd09ca1539bc9b43bc21b6d07e2093ec43a25d7947c944eb5da30147e1fa17"} Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.477020 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bc8d6f54d-r5zzd" event={"ID":"8d02aeed-21ec-4659-80a0-b4e3b0e9bc25","Type":"ContainerDied","Data":"c8437a87f12a8b349dbfcfc9bfdd0cfb89bb38fc6583da76af5c81a90cac9293"} Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.477050 4796 scope.go:117] "RemoveContainer" containerID="b7fd09ca1539bc9b43bc21b6d07e2093ec43a25d7947c944eb5da30147e1fa17" Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.476949 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7bc8d6f54d-r5zzd" Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.480135 4796 generic.go:334] "Generic (PLEG): container finished" podID="4ac53e96-00de-4722-97eb-7137ef4431da" containerID="b418cd56c17d6f8e65197144e14a7d5f620e832f41492f583ad1bc0438e7352e" exitCode=0 Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.480196 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79fd6c9864-wxp27" event={"ID":"4ac53e96-00de-4722-97eb-7137ef4431da","Type":"ContainerDied","Data":"b418cd56c17d6f8e65197144e14a7d5f620e832f41492f583ad1bc0438e7352e"} Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.480235 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-79fd6c9864-wxp27" event={"ID":"4ac53e96-00de-4722-97eb-7137ef4431da","Type":"ContainerDied","Data":"701acdf24f0d41c7ce05165601e880a234f946016ba96730eac64591d5923bd0"} Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.480320 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-79fd6c9864-wxp27" Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.502109 4796 scope.go:117] "RemoveContainer" containerID="b7fd09ca1539bc9b43bc21b6d07e2093ec43a25d7947c944eb5da30147e1fa17" Jan 27 06:51:12 crc kubenswrapper[4796]: E0127 06:51:12.503047 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7fd09ca1539bc9b43bc21b6d07e2093ec43a25d7947c944eb5da30147e1fa17\": container with ID starting with b7fd09ca1539bc9b43bc21b6d07e2093ec43a25d7947c944eb5da30147e1fa17 not found: ID does not exist" containerID="b7fd09ca1539bc9b43bc21b6d07e2093ec43a25d7947c944eb5da30147e1fa17" Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.503118 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7fd09ca1539bc9b43bc21b6d07e2093ec43a25d7947c944eb5da30147e1fa17"} err="failed to get container status \"b7fd09ca1539bc9b43bc21b6d07e2093ec43a25d7947c944eb5da30147e1fa17\": rpc error: code = NotFound desc = could not find container \"b7fd09ca1539bc9b43bc21b6d07e2093ec43a25d7947c944eb5da30147e1fa17\": container with ID starting with b7fd09ca1539bc9b43bc21b6d07e2093ec43a25d7947c944eb5da30147e1fa17 not found: ID does not exist" Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.503160 4796 scope.go:117] "RemoveContainer" containerID="b418cd56c17d6f8e65197144e14a7d5f620e832f41492f583ad1bc0438e7352e" Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.523387 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7bc8d6f54d-r5zzd"] Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.528454 4796 scope.go:117] "RemoveContainer" containerID="b418cd56c17d6f8e65197144e14a7d5f620e832f41492f583ad1bc0438e7352e" Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.531698 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7bc8d6f54d-r5zzd"] Jan 27 06:51:12 crc kubenswrapper[4796]: E0127 06:51:12.532088 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b418cd56c17d6f8e65197144e14a7d5f620e832f41492f583ad1bc0438e7352e\": container with ID starting with 
b418cd56c17d6f8e65197144e14a7d5f620e832f41492f583ad1bc0438e7352e not found: ID does not exist" containerID="b418cd56c17d6f8e65197144e14a7d5f620e832f41492f583ad1bc0438e7352e" Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.532151 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b418cd56c17d6f8e65197144e14a7d5f620e832f41492f583ad1bc0438e7352e"} err="failed to get container status \"b418cd56c17d6f8e65197144e14a7d5f620e832f41492f583ad1bc0438e7352e\": rpc error: code = NotFound desc = could not find container \"b418cd56c17d6f8e65197144e14a7d5f620e832f41492f583ad1bc0438e7352e\": container with ID starting with b418cd56c17d6f8e65197144e14a7d5f620e832f41492f583ad1bc0438e7352e not found: ID does not exist" Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.535352 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79fd6c9864-wxp27"] Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.538028 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-79fd6c9864-wxp27"] Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.546071 4796 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ac53e96-00de-4722-97eb-7137ef4431da-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.546174 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ac53e96-00de-4722-97eb-7137ef4431da-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.546242 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9lzr\" (UniqueName: \"kubernetes.io/projected/4ac53e96-00de-4722-97eb-7137ef4431da-kube-api-access-k9lzr\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.546311 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ac53e96-00de-4722-97eb-7137ef4431da-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.760801 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ac53e96-00de-4722-97eb-7137ef4431da" path="/var/lib/kubelet/pods/4ac53e96-00de-4722-97eb-7137ef4431da/volumes" Jan 27 06:51:12 crc kubenswrapper[4796]: I0127 06:51:12.762872 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d02aeed-21ec-4659-80a0-b4e3b0e9bc25" path="/var/lib/kubelet/pods/8d02aeed-21ec-4659-80a0-b4e3b0e9bc25/volumes" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.058784 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5c47658fbf-r6zwf"] Jan 27 06:51:13 crc kubenswrapper[4796]: E0127 06:51:13.059195 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.059223 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 06:51:13 crc kubenswrapper[4796]: E0127 06:51:13.059252 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d02aeed-21ec-4659-80a0-b4e3b0e9bc25" containerName="controller-manager" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 
06:51:13.059271 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d02aeed-21ec-4659-80a0-b4e3b0e9bc25" containerName="controller-manager" Jan 27 06:51:13 crc kubenswrapper[4796]: E0127 06:51:13.059310 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ac53e96-00de-4722-97eb-7137ef4431da" containerName="route-controller-manager" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.059327 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ac53e96-00de-4722-97eb-7137ef4431da" containerName="route-controller-manager" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.059832 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ac53e96-00de-4722-97eb-7137ef4431da" containerName="route-controller-manager" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.059911 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d02aeed-21ec-4659-80a0-b4e3b0e9bc25" containerName="controller-manager" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.059936 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.061259 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c47658fbf-r6zwf" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.066504 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.067393 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-897ffc96d-bd6sh"] Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.069469 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.069798 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.070728 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.070817 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.070843 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-bd6sh" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.070727 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.078150 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.078220 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.078705 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.084354 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.084799 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.085070 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.092785 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c47658fbf-r6zwf"] Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.093088 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.107772 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-897ffc96d-bd6sh"] Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.157773 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw5w5\" (UniqueName: \"kubernetes.io/projected/88f307e5-e883-435e-820e-593003f0491e-kube-api-access-jw5w5\") pod \"controller-manager-5c47658fbf-r6zwf\" (UID: \"88f307e5-e883-435e-820e-593003f0491e\") " pod="openshift-controller-manager/controller-manager-5c47658fbf-r6zwf" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.157867 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4c123a8-bba0-4090-a9a5-33d724e01ff8-client-ca\") pod \"route-controller-manager-897ffc96d-bd6sh\" (UID: \"d4c123a8-bba0-4090-a9a5-33d724e01ff8\") " pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-bd6sh" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.157912 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88f307e5-e883-435e-820e-593003f0491e-config\") pod \"controller-manager-5c47658fbf-r6zwf\" (UID: \"88f307e5-e883-435e-820e-593003f0491e\") " pod="openshift-controller-manager/controller-manager-5c47658fbf-r6zwf" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.158047 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d4c123a8-bba0-4090-a9a5-33d724e01ff8-config\") pod \"route-controller-manager-897ffc96d-bd6sh\" (UID: \"d4c123a8-bba0-4090-a9a5-33d724e01ff8\") " pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-bd6sh" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.158152 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88f307e5-e883-435e-820e-593003f0491e-serving-cert\") pod \"controller-manager-5c47658fbf-r6zwf\" (UID: \"88f307e5-e883-435e-820e-593003f0491e\") " pod="openshift-controller-manager/controller-manager-5c47658fbf-r6zwf" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.158181 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4c123a8-bba0-4090-a9a5-33d724e01ff8-serving-cert\") pod \"route-controller-manager-897ffc96d-bd6sh\" (UID: \"d4c123a8-bba0-4090-a9a5-33d724e01ff8\") " pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-bd6sh" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.158240 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88f307e5-e883-435e-820e-593003f0491e-proxy-ca-bundles\") pod \"controller-manager-5c47658fbf-r6zwf\" (UID: \"88f307e5-e883-435e-820e-593003f0491e\") " pod="openshift-controller-manager/controller-manager-5c47658fbf-r6zwf" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.158289 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88f307e5-e883-435e-820e-593003f0491e-client-ca\") pod \"controller-manager-5c47658fbf-r6zwf\" (UID: \"88f307e5-e883-435e-820e-593003f0491e\") " pod="openshift-controller-manager/controller-manager-5c47658fbf-r6zwf" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.158325 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8v75\" (UniqueName: \"kubernetes.io/projected/d4c123a8-bba0-4090-a9a5-33d724e01ff8-kube-api-access-k8v75\") pod \"route-controller-manager-897ffc96d-bd6sh\" (UID: \"d4c123a8-bba0-4090-a9a5-33d724e01ff8\") " pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-bd6sh" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.259729 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88f307e5-e883-435e-820e-593003f0491e-proxy-ca-bundles\") pod \"controller-manager-5c47658fbf-r6zwf\" (UID: \"88f307e5-e883-435e-820e-593003f0491e\") " pod="openshift-controller-manager/controller-manager-5c47658fbf-r6zwf" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.259789 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88f307e5-e883-435e-820e-593003f0491e-client-ca\") pod \"controller-manager-5c47658fbf-r6zwf\" (UID: \"88f307e5-e883-435e-820e-593003f0491e\") " pod="openshift-controller-manager/controller-manager-5c47658fbf-r6zwf" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.259810 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8v75\" (UniqueName: 
\"kubernetes.io/projected/d4c123a8-bba0-4090-a9a5-33d724e01ff8-kube-api-access-k8v75\") pod \"route-controller-manager-897ffc96d-bd6sh\" (UID: \"d4c123a8-bba0-4090-a9a5-33d724e01ff8\") " pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-bd6sh" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.259842 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw5w5\" (UniqueName: \"kubernetes.io/projected/88f307e5-e883-435e-820e-593003f0491e-kube-api-access-jw5w5\") pod \"controller-manager-5c47658fbf-r6zwf\" (UID: \"88f307e5-e883-435e-820e-593003f0491e\") " pod="openshift-controller-manager/controller-manager-5c47658fbf-r6zwf" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.259861 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4c123a8-bba0-4090-a9a5-33d724e01ff8-client-ca\") pod \"route-controller-manager-897ffc96d-bd6sh\" (UID: \"d4c123a8-bba0-4090-a9a5-33d724e01ff8\") " pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-bd6sh" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.259877 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88f307e5-e883-435e-820e-593003f0491e-config\") pod \"controller-manager-5c47658fbf-r6zwf\" (UID: \"88f307e5-e883-435e-820e-593003f0491e\") " pod="openshift-controller-manager/controller-manager-5c47658fbf-r6zwf" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.259909 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4c123a8-bba0-4090-a9a5-33d724e01ff8-config\") pod \"route-controller-manager-897ffc96d-bd6sh\" (UID: \"d4c123a8-bba0-4090-a9a5-33d724e01ff8\") " pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-bd6sh" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.259927 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88f307e5-e883-435e-820e-593003f0491e-serving-cert\") pod \"controller-manager-5c47658fbf-r6zwf\" (UID: \"88f307e5-e883-435e-820e-593003f0491e\") " pod="openshift-controller-manager/controller-manager-5c47658fbf-r6zwf" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.259941 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4c123a8-bba0-4090-a9a5-33d724e01ff8-serving-cert\") pod \"route-controller-manager-897ffc96d-bd6sh\" (UID: \"d4c123a8-bba0-4090-a9a5-33d724e01ff8\") " pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-bd6sh" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.261648 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4c123a8-bba0-4090-a9a5-33d724e01ff8-client-ca\") pod \"route-controller-manager-897ffc96d-bd6sh\" (UID: \"d4c123a8-bba0-4090-a9a5-33d724e01ff8\") " pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-bd6sh" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.261726 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88f307e5-e883-435e-820e-593003f0491e-client-ca\") pod \"controller-manager-5c47658fbf-r6zwf\" (UID: 
\"88f307e5-e883-435e-820e-593003f0491e\") " pod="openshift-controller-manager/controller-manager-5c47658fbf-r6zwf" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.261878 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4c123a8-bba0-4090-a9a5-33d724e01ff8-config\") pod \"route-controller-manager-897ffc96d-bd6sh\" (UID: \"d4c123a8-bba0-4090-a9a5-33d724e01ff8\") " pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-bd6sh" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.263452 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88f307e5-e883-435e-820e-593003f0491e-config\") pod \"controller-manager-5c47658fbf-r6zwf\" (UID: \"88f307e5-e883-435e-820e-593003f0491e\") " pod="openshift-controller-manager/controller-manager-5c47658fbf-r6zwf" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.263943 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/88f307e5-e883-435e-820e-593003f0491e-proxy-ca-bundles\") pod \"controller-manager-5c47658fbf-r6zwf\" (UID: \"88f307e5-e883-435e-820e-593003f0491e\") " pod="openshift-controller-manager/controller-manager-5c47658fbf-r6zwf" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.265045 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4c123a8-bba0-4090-a9a5-33d724e01ff8-serving-cert\") pod \"route-controller-manager-897ffc96d-bd6sh\" (UID: \"d4c123a8-bba0-4090-a9a5-33d724e01ff8\") " pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-bd6sh" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.267636 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88f307e5-e883-435e-820e-593003f0491e-serving-cert\") pod \"controller-manager-5c47658fbf-r6zwf\" (UID: \"88f307e5-e883-435e-820e-593003f0491e\") " pod="openshift-controller-manager/controller-manager-5c47658fbf-r6zwf" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.291305 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8v75\" (UniqueName: \"kubernetes.io/projected/d4c123a8-bba0-4090-a9a5-33d724e01ff8-kube-api-access-k8v75\") pod \"route-controller-manager-897ffc96d-bd6sh\" (UID: \"d4c123a8-bba0-4090-a9a5-33d724e01ff8\") " pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-bd6sh" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.298075 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw5w5\" (UniqueName: \"kubernetes.io/projected/88f307e5-e883-435e-820e-593003f0491e-kube-api-access-jw5w5\") pod \"controller-manager-5c47658fbf-r6zwf\" (UID: \"88f307e5-e883-435e-820e-593003f0491e\") " pod="openshift-controller-manager/controller-manager-5c47658fbf-r6zwf" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.447625 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c47658fbf-r6zwf" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.459789 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-bd6sh" Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.688451 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-897ffc96d-bd6sh"] Jan 27 06:51:13 crc kubenswrapper[4796]: I0127 06:51:13.985164 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c47658fbf-r6zwf"] Jan 27 06:51:13 crc kubenswrapper[4796]: W0127 06:51:13.988091 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88f307e5_e883_435e_820e_593003f0491e.slice/crio-62a5af7862f2a1c03cf0601b2a315c82da8bcada988413943ae7869c84be3f72 WatchSource:0}: Error finding container 62a5af7862f2a1c03cf0601b2a315c82da8bcada988413943ae7869c84be3f72: Status 404 returned error can't find the container with id 62a5af7862f2a1c03cf0601b2a315c82da8bcada988413943ae7869c84be3f72 Jan 27 06:51:14 crc kubenswrapper[4796]: E0127 06:51:14.029338 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-a8f0c96fa9c29b4f30167f4d3e7c78ddd6ffe63dc6b3953421c6c27dfbec933c\": RecentStats: unable to find data in memory cache]" Jan 27 06:51:14 crc kubenswrapper[4796]: I0127 06:51:14.497493 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-bd6sh" event={"ID":"d4c123a8-bba0-4090-a9a5-33d724e01ff8","Type":"ContainerStarted","Data":"ae7690a9005ecfbee74afe5170d5fc576f5f088f27f754c189b0282d7778a234"} Jan 27 06:51:14 crc kubenswrapper[4796]: I0127 06:51:14.497979 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-bd6sh" event={"ID":"d4c123a8-bba0-4090-a9a5-33d724e01ff8","Type":"ContainerStarted","Data":"3fc1ad1f36e28f4c31d62eb5dffcc1703149b43c65335503fc3406deb58465b6"} Jan 27 06:51:14 crc kubenswrapper[4796]: I0127 06:51:14.498006 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-bd6sh" Jan 27 06:51:14 crc kubenswrapper[4796]: I0127 06:51:14.499259 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c47658fbf-r6zwf" event={"ID":"88f307e5-e883-435e-820e-593003f0491e","Type":"ContainerStarted","Data":"709e2d7258e9d6ebb325a43b3f57dc3f7189c286c97d1691899c1c01896dd4e8"} Jan 27 06:51:14 crc kubenswrapper[4796]: I0127 06:51:14.499316 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c47658fbf-r6zwf" event={"ID":"88f307e5-e883-435e-820e-593003f0491e","Type":"ContainerStarted","Data":"62a5af7862f2a1c03cf0601b2a315c82da8bcada988413943ae7869c84be3f72"} Jan 27 06:51:14 crc kubenswrapper[4796]: I0127 06:51:14.499507 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5c47658fbf-r6zwf" Jan 27 06:51:14 crc kubenswrapper[4796]: I0127 06:51:14.503636 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-bd6sh" Jan 27 06:51:14 crc kubenswrapper[4796]: I0127 06:51:14.506418 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5c47658fbf-r6zwf" Jan 27 06:51:14 crc kubenswrapper[4796]: I0127 06:51:14.521131 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-bd6sh" podStartSLOduration=3.52111121 podStartE2EDuration="3.52111121s" podCreationTimestamp="2026-01-27 06:51:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:51:14.519861022 +0000 UTC m=+295.626828369" watchObservedRunningTime="2026-01-27 06:51:14.52111121 +0000 UTC m=+295.628078537" Jan 27 06:51:14 crc kubenswrapper[4796]: I0127 06:51:14.573204 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5c47658fbf-r6zwf" podStartSLOduration=3.573185372 podStartE2EDuration="3.573185372s" podCreationTimestamp="2026-01-27 06:51:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:51:14.572014696 +0000 UTC m=+295.678982023" watchObservedRunningTime="2026-01-27 06:51:14.573185372 +0000 UTC m=+295.680152699" Jan 27 06:51:20 crc kubenswrapper[4796]: I0127 06:51:20.316166 4796 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 27 06:51:20 crc kubenswrapper[4796]: I0127 06:51:20.534513 4796 generic.go:334] "Generic (PLEG): container finished" podID="c009d452-642e-47de-994c-cc6e0af791f9" containerID="e9ee7ef0fe9e8891ad3a78a721d86719309131587f256048558d231cbe5943e1" exitCode=0 Jan 27 06:51:20 crc kubenswrapper[4796]: I0127 06:51:20.534606 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qhgrn" event={"ID":"c009d452-642e-47de-994c-cc6e0af791f9","Type":"ContainerDied","Data":"e9ee7ef0fe9e8891ad3a78a721d86719309131587f256048558d231cbe5943e1"} Jan 27 06:51:20 crc kubenswrapper[4796]: I0127 06:51:20.535636 4796 scope.go:117] "RemoveContainer" containerID="e9ee7ef0fe9e8891ad3a78a721d86719309131587f256048558d231cbe5943e1" Jan 27 06:51:21 crc kubenswrapper[4796]: I0127 06:51:21.542608 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qhgrn" event={"ID":"c009d452-642e-47de-994c-cc6e0af791f9","Type":"ContainerStarted","Data":"089147d22a80bc60a05af262dceaa58388ef7515590f16920426b7fd80b9a461"} Jan 27 06:51:21 crc kubenswrapper[4796]: I0127 06:51:21.543467 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-qhgrn" Jan 27 06:51:21 crc kubenswrapper[4796]: I0127 06:51:21.544611 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-qhgrn" Jan 27 06:51:31 crc kubenswrapper[4796]: I0127 06:51:31.816186 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-897ffc96d-bd6sh"] Jan 27 06:51:31 crc kubenswrapper[4796]: I0127 06:51:31.817082 4796 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-bd6sh" podUID="d4c123a8-bba0-4090-a9a5-33d724e01ff8" containerName="route-controller-manager" containerID="cri-o://ae7690a9005ecfbee74afe5170d5fc576f5f088f27f754c189b0282d7778a234" gracePeriod=30 Jan 27 06:51:32 crc kubenswrapper[4796]: I0127 06:51:32.278440 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-bd6sh" Jan 27 06:51:32 crc kubenswrapper[4796]: I0127 06:51:32.418335 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8v75\" (UniqueName: \"kubernetes.io/projected/d4c123a8-bba0-4090-a9a5-33d724e01ff8-kube-api-access-k8v75\") pod \"d4c123a8-bba0-4090-a9a5-33d724e01ff8\" (UID: \"d4c123a8-bba0-4090-a9a5-33d724e01ff8\") " Jan 27 06:51:32 crc kubenswrapper[4796]: I0127 06:51:32.418427 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4c123a8-bba0-4090-a9a5-33d724e01ff8-client-ca\") pod \"d4c123a8-bba0-4090-a9a5-33d724e01ff8\" (UID: \"d4c123a8-bba0-4090-a9a5-33d724e01ff8\") " Jan 27 06:51:32 crc kubenswrapper[4796]: I0127 06:51:32.418478 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4c123a8-bba0-4090-a9a5-33d724e01ff8-serving-cert\") pod \"d4c123a8-bba0-4090-a9a5-33d724e01ff8\" (UID: \"d4c123a8-bba0-4090-a9a5-33d724e01ff8\") " Jan 27 06:51:32 crc kubenswrapper[4796]: I0127 06:51:32.418566 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4c123a8-bba0-4090-a9a5-33d724e01ff8-config\") pod \"d4c123a8-bba0-4090-a9a5-33d724e01ff8\" (UID: \"d4c123a8-bba0-4090-a9a5-33d724e01ff8\") " Jan 27 06:51:32 crc kubenswrapper[4796]: I0127 06:51:32.419442 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4c123a8-bba0-4090-a9a5-33d724e01ff8-config" (OuterVolumeSpecName: "config") pod "d4c123a8-bba0-4090-a9a5-33d724e01ff8" (UID: "d4c123a8-bba0-4090-a9a5-33d724e01ff8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:51:32 crc kubenswrapper[4796]: I0127 06:51:32.419691 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4c123a8-bba0-4090-a9a5-33d724e01ff8-client-ca" (OuterVolumeSpecName: "client-ca") pod "d4c123a8-bba0-4090-a9a5-33d724e01ff8" (UID: "d4c123a8-bba0-4090-a9a5-33d724e01ff8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:51:32 crc kubenswrapper[4796]: I0127 06:51:32.423427 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4c123a8-bba0-4090-a9a5-33d724e01ff8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d4c123a8-bba0-4090-a9a5-33d724e01ff8" (UID: "d4c123a8-bba0-4090-a9a5-33d724e01ff8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:51:32 crc kubenswrapper[4796]: I0127 06:51:32.424858 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4c123a8-bba0-4090-a9a5-33d724e01ff8-kube-api-access-k8v75" (OuterVolumeSpecName: "kube-api-access-k8v75") pod "d4c123a8-bba0-4090-a9a5-33d724e01ff8" (UID: "d4c123a8-bba0-4090-a9a5-33d724e01ff8"). 
InnerVolumeSpecName "kube-api-access-k8v75". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:51:32 crc kubenswrapper[4796]: I0127 06:51:32.520700 4796 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4c123a8-bba0-4090-a9a5-33d724e01ff8-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:32 crc kubenswrapper[4796]: I0127 06:51:32.520777 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4c123a8-bba0-4090-a9a5-33d724e01ff8-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:32 crc kubenswrapper[4796]: I0127 06:51:32.520791 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4c123a8-bba0-4090-a9a5-33d724e01ff8-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:32 crc kubenswrapper[4796]: I0127 06:51:32.520816 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8v75\" (UniqueName: \"kubernetes.io/projected/d4c123a8-bba0-4090-a9a5-33d724e01ff8-kube-api-access-k8v75\") on node \"crc\" DevicePath \"\"" Jan 27 06:51:32 crc kubenswrapper[4796]: I0127 06:51:32.611012 4796 generic.go:334] "Generic (PLEG): container finished" podID="d4c123a8-bba0-4090-a9a5-33d724e01ff8" containerID="ae7690a9005ecfbee74afe5170d5fc576f5f088f27f754c189b0282d7778a234" exitCode=0 Jan 27 06:51:32 crc kubenswrapper[4796]: I0127 06:51:32.611054 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-bd6sh" event={"ID":"d4c123a8-bba0-4090-a9a5-33d724e01ff8","Type":"ContainerDied","Data":"ae7690a9005ecfbee74afe5170d5fc576f5f088f27f754c189b0282d7778a234"} Jan 27 06:51:32 crc kubenswrapper[4796]: I0127 06:51:32.611084 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-bd6sh" event={"ID":"d4c123a8-bba0-4090-a9a5-33d724e01ff8","Type":"ContainerDied","Data":"3fc1ad1f36e28f4c31d62eb5dffcc1703149b43c65335503fc3406deb58465b6"} Jan 27 06:51:32 crc kubenswrapper[4796]: I0127 06:51:32.611102 4796 scope.go:117] "RemoveContainer" containerID="ae7690a9005ecfbee74afe5170d5fc576f5f088f27f754c189b0282d7778a234" Jan 27 06:51:32 crc kubenswrapper[4796]: I0127 06:51:32.611106 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-bd6sh" Jan 27 06:51:32 crc kubenswrapper[4796]: I0127 06:51:32.629038 4796 scope.go:117] "RemoveContainer" containerID="ae7690a9005ecfbee74afe5170d5fc576f5f088f27f754c189b0282d7778a234" Jan 27 06:51:32 crc kubenswrapper[4796]: E0127 06:51:32.629529 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae7690a9005ecfbee74afe5170d5fc576f5f088f27f754c189b0282d7778a234\": container with ID starting with ae7690a9005ecfbee74afe5170d5fc576f5f088f27f754c189b0282d7778a234 not found: ID does not exist" containerID="ae7690a9005ecfbee74afe5170d5fc576f5f088f27f754c189b0282d7778a234" Jan 27 06:51:32 crc kubenswrapper[4796]: I0127 06:51:32.629593 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae7690a9005ecfbee74afe5170d5fc576f5f088f27f754c189b0282d7778a234"} err="failed to get container status \"ae7690a9005ecfbee74afe5170d5fc576f5f088f27f754c189b0282d7778a234\": rpc error: code = NotFound desc = could not find container \"ae7690a9005ecfbee74afe5170d5fc576f5f088f27f754c189b0282d7778a234\": container with ID starting with ae7690a9005ecfbee74afe5170d5fc576f5f088f27f754c189b0282d7778a234 not found: ID does not exist" Jan 27 06:51:32 crc kubenswrapper[4796]: I0127 06:51:32.646739 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-897ffc96d-bd6sh"] Jan 27 06:51:32 crc kubenswrapper[4796]: I0127 06:51:32.650705 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-897ffc96d-bd6sh"] Jan 27 06:51:32 crc kubenswrapper[4796]: I0127 06:51:32.757707 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4c123a8-bba0-4090-a9a5-33d724e01ff8" path="/var/lib/kubelet/pods/d4c123a8-bba0-4090-a9a5-33d724e01ff8/volumes" Jan 27 06:51:33 crc kubenswrapper[4796]: I0127 06:51:33.077509 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cdbf6b485-tnfhz"] Jan 27 06:51:33 crc kubenswrapper[4796]: E0127 06:51:33.082681 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4c123a8-bba0-4090-a9a5-33d724e01ff8" containerName="route-controller-manager" Jan 27 06:51:33 crc kubenswrapper[4796]: I0127 06:51:33.082915 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4c123a8-bba0-4090-a9a5-33d724e01ff8" containerName="route-controller-manager" Jan 27 06:51:33 crc kubenswrapper[4796]: I0127 06:51:33.083269 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4c123a8-bba0-4090-a9a5-33d724e01ff8" containerName="route-controller-manager" Jan 27 06:51:33 crc kubenswrapper[4796]: I0127 06:51:33.084084 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-tnfhz" Jan 27 06:51:33 crc kubenswrapper[4796]: I0127 06:51:33.087035 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 27 06:51:33 crc kubenswrapper[4796]: I0127 06:51:33.088106 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 27 06:51:33 crc kubenswrapper[4796]: I0127 06:51:33.088734 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 27 06:51:33 crc kubenswrapper[4796]: I0127 06:51:33.089371 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 27 06:51:33 crc kubenswrapper[4796]: I0127 06:51:33.090073 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 27 06:51:33 crc kubenswrapper[4796]: I0127 06:51:33.090602 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 27 06:51:33 crc kubenswrapper[4796]: I0127 06:51:33.092598 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cdbf6b485-tnfhz"] Jan 27 06:51:33 crc kubenswrapper[4796]: I0127 06:51:33.233369 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507-client-ca\") pod \"route-controller-manager-7cdbf6b485-tnfhz\" (UID: \"57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507\") " pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-tnfhz" Jan 27 06:51:33 crc kubenswrapper[4796]: I0127 06:51:33.233877 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507-serving-cert\") pod \"route-controller-manager-7cdbf6b485-tnfhz\" (UID: \"57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507\") " pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-tnfhz" Jan 27 06:51:33 crc kubenswrapper[4796]: I0127 06:51:33.233933 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cj87\" (UniqueName: \"kubernetes.io/projected/57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507-kube-api-access-5cj87\") pod \"route-controller-manager-7cdbf6b485-tnfhz\" (UID: \"57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507\") " pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-tnfhz" Jan 27 06:51:33 crc kubenswrapper[4796]: I0127 06:51:33.233998 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507-config\") pod \"route-controller-manager-7cdbf6b485-tnfhz\" (UID: \"57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507\") " pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-tnfhz" Jan 27 06:51:33 crc kubenswrapper[4796]: I0127 06:51:33.335395 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507-client-ca\") pod 
\"route-controller-manager-7cdbf6b485-tnfhz\" (UID: \"57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507\") " pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-tnfhz" Jan 27 06:51:33 crc kubenswrapper[4796]: I0127 06:51:33.335524 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507-serving-cert\") pod \"route-controller-manager-7cdbf6b485-tnfhz\" (UID: \"57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507\") " pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-tnfhz" Jan 27 06:51:33 crc kubenswrapper[4796]: I0127 06:51:33.335603 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cj87\" (UniqueName: \"kubernetes.io/projected/57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507-kube-api-access-5cj87\") pod \"route-controller-manager-7cdbf6b485-tnfhz\" (UID: \"57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507\") " pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-tnfhz" Jan 27 06:51:33 crc kubenswrapper[4796]: I0127 06:51:33.335651 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507-config\") pod \"route-controller-manager-7cdbf6b485-tnfhz\" (UID: \"57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507\") " pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-tnfhz" Jan 27 06:51:33 crc kubenswrapper[4796]: I0127 06:51:33.337418 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507-client-ca\") pod \"route-controller-manager-7cdbf6b485-tnfhz\" (UID: \"57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507\") " pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-tnfhz" Jan 27 06:51:33 crc kubenswrapper[4796]: I0127 06:51:33.337576 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507-config\") pod \"route-controller-manager-7cdbf6b485-tnfhz\" (UID: \"57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507\") " pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-tnfhz" Jan 27 06:51:33 crc kubenswrapper[4796]: I0127 06:51:33.343258 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507-serving-cert\") pod \"route-controller-manager-7cdbf6b485-tnfhz\" (UID: \"57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507\") " pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-tnfhz" Jan 27 06:51:33 crc kubenswrapper[4796]: I0127 06:51:33.368798 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cj87\" (UniqueName: \"kubernetes.io/projected/57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507-kube-api-access-5cj87\") pod \"route-controller-manager-7cdbf6b485-tnfhz\" (UID: \"57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507\") " pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-tnfhz" Jan 27 06:51:33 crc kubenswrapper[4796]: I0127 06:51:33.459634 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-tnfhz" Jan 27 06:51:33 crc kubenswrapper[4796]: I0127 06:51:33.911911 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cdbf6b485-tnfhz"] Jan 27 06:51:33 crc kubenswrapper[4796]: W0127 06:51:33.914641 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57c6fb6d_3f1c_4ea5_aa1b_8fa8f3674507.slice/crio-f7ef5c97c532aa572dda9c2805843ec2b35b6dcdaa5d006a75d7384473df792d WatchSource:0}: Error finding container f7ef5c97c532aa572dda9c2805843ec2b35b6dcdaa5d006a75d7384473df792d: Status 404 returned error can't find the container with id f7ef5c97c532aa572dda9c2805843ec2b35b6dcdaa5d006a75d7384473df792d Jan 27 06:51:34 crc kubenswrapper[4796]: I0127 06:51:34.624990 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-tnfhz" event={"ID":"57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507","Type":"ContainerStarted","Data":"ce66caf1c495b6d1881e34c32c2d713372d40d2a928bc58c2f8be1a1f5e4d6ad"} Jan 27 06:51:34 crc kubenswrapper[4796]: I0127 06:51:34.625042 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-tnfhz" event={"ID":"57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507","Type":"ContainerStarted","Data":"f7ef5c97c532aa572dda9c2805843ec2b35b6dcdaa5d006a75d7384473df792d"} Jan 27 06:51:34 crc kubenswrapper[4796]: I0127 06:51:34.625814 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-tnfhz" Jan 27 06:51:34 crc kubenswrapper[4796]: I0127 06:51:34.633826 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-tnfhz" Jan 27 06:51:34 crc kubenswrapper[4796]: I0127 06:51:34.640267 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-tnfhz" podStartSLOduration=3.6402524400000003 podStartE2EDuration="3.64025244s" podCreationTimestamp="2026-01-27 06:51:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:51:34.639634062 +0000 UTC m=+315.746601389" watchObservedRunningTime="2026-01-27 06:51:34.64025244 +0000 UTC m=+315.747219767" Jan 27 06:52:26 crc kubenswrapper[4796]: I0127 06:52:26.113212 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j8r59"] Jan 27 06:52:26 crc kubenswrapper[4796]: I0127 06:52:26.115493 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-j8r59" podUID="25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae" containerName="registry-server" containerID="cri-o://145684e338d0a3acbab02ae51c8304b1314dc11007f526099b98c4089bde69f8" gracePeriod=30 Jan 27 06:52:26 crc kubenswrapper[4796]: I0127 06:52:26.125868 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cv5hm"] Jan 27 06:52:26 crc kubenswrapper[4796]: I0127 06:52:26.126287 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cv5hm" 
podUID="242ef06f-796a-4c77-810b-bde4a5fbc087" containerName="registry-server" containerID="cri-o://0eeed9873078fe1ad09ba89d37c1845d14cf24e8f115de55365945b20d7ce3a3" gracePeriod=30 Jan 27 06:52:26 crc kubenswrapper[4796]: I0127 06:52:26.144430 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qhgrn"] Jan 27 06:52:26 crc kubenswrapper[4796]: I0127 06:52:26.144681 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-qhgrn" podUID="c009d452-642e-47de-994c-cc6e0af791f9" containerName="marketplace-operator" containerID="cri-o://089147d22a80bc60a05af262dceaa58388ef7515590f16920426b7fd80b9a461" gracePeriod=30 Jan 27 06:52:26 crc kubenswrapper[4796]: I0127 06:52:26.154678 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8lc8b"] Jan 27 06:52:26 crc kubenswrapper[4796]: I0127 06:52:26.155092 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8lc8b" podUID="47a26ac3-524b-47c5-abb8-d2c4837659e7" containerName="registry-server" containerID="cri-o://fbbdae8cd5e56f50a5c6abb80372e700e4368c3bf10f314c2dc207f5499c2527" gracePeriod=30 Jan 27 06:52:26 crc kubenswrapper[4796]: I0127 06:52:26.164924 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mn94j"] Jan 27 06:52:26 crc kubenswrapper[4796]: I0127 06:52:26.165229 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mn94j" podUID="ae643542-ca5e-4cee-aaba-818f3d424763" containerName="registry-server" containerID="cri-o://b69e0f63a0b8ef774051df55b3c95eb1838265c411b21561b246fea52b63892a" gracePeriod=30 Jan 27 06:52:26 crc kubenswrapper[4796]: I0127 06:52:26.170074 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8dx6h"] Jan 27 06:52:26 crc kubenswrapper[4796]: I0127 06:52:26.170944 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8dx6h" Jan 27 06:52:26 crc kubenswrapper[4796]: I0127 06:52:26.174825 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8dx6h"] Jan 27 06:52:26 crc kubenswrapper[4796]: I0127 06:52:26.280728 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e797ebfa-a82a-42f5-883f-68d70ae80e7f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8dx6h\" (UID: \"e797ebfa-a82a-42f5-883f-68d70ae80e7f\") " pod="openshift-marketplace/marketplace-operator-79b997595-8dx6h" Jan 27 06:52:26 crc kubenswrapper[4796]: I0127 06:52:26.280800 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e797ebfa-a82a-42f5-883f-68d70ae80e7f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8dx6h\" (UID: \"e797ebfa-a82a-42f5-883f-68d70ae80e7f\") " pod="openshift-marketplace/marketplace-operator-79b997595-8dx6h" Jan 27 06:52:26 crc kubenswrapper[4796]: I0127 06:52:26.280925 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7hxr\" (UniqueName: \"kubernetes.io/projected/e797ebfa-a82a-42f5-883f-68d70ae80e7f-kube-api-access-t7hxr\") pod \"marketplace-operator-79b997595-8dx6h\" (UID: \"e797ebfa-a82a-42f5-883f-68d70ae80e7f\") " pod="openshift-marketplace/marketplace-operator-79b997595-8dx6h" Jan 27 06:52:26 crc kubenswrapper[4796]: I0127 06:52:26.382513 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7hxr\" (UniqueName: \"kubernetes.io/projected/e797ebfa-a82a-42f5-883f-68d70ae80e7f-kube-api-access-t7hxr\") pod \"marketplace-operator-79b997595-8dx6h\" (UID: \"e797ebfa-a82a-42f5-883f-68d70ae80e7f\") " pod="openshift-marketplace/marketplace-operator-79b997595-8dx6h" Jan 27 06:52:26 crc kubenswrapper[4796]: I0127 06:52:26.382618 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e797ebfa-a82a-42f5-883f-68d70ae80e7f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8dx6h\" (UID: \"e797ebfa-a82a-42f5-883f-68d70ae80e7f\") " pod="openshift-marketplace/marketplace-operator-79b997595-8dx6h" Jan 27 06:52:26 crc kubenswrapper[4796]: I0127 06:52:26.382663 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e797ebfa-a82a-42f5-883f-68d70ae80e7f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8dx6h\" (UID: \"e797ebfa-a82a-42f5-883f-68d70ae80e7f\") " pod="openshift-marketplace/marketplace-operator-79b997595-8dx6h" Jan 27 06:52:26 crc kubenswrapper[4796]: I0127 06:52:26.384455 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e797ebfa-a82a-42f5-883f-68d70ae80e7f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-8dx6h\" (UID: \"e797ebfa-a82a-42f5-883f-68d70ae80e7f\") " pod="openshift-marketplace/marketplace-operator-79b997595-8dx6h" Jan 27 06:52:26 crc kubenswrapper[4796]: I0127 06:52:26.394068 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/e797ebfa-a82a-42f5-883f-68d70ae80e7f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-8dx6h\" (UID: \"e797ebfa-a82a-42f5-883f-68d70ae80e7f\") " pod="openshift-marketplace/marketplace-operator-79b997595-8dx6h" Jan 27 06:52:26 crc kubenswrapper[4796]: I0127 06:52:26.398458 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7hxr\" (UniqueName: \"kubernetes.io/projected/e797ebfa-a82a-42f5-883f-68d70ae80e7f-kube-api-access-t7hxr\") pod \"marketplace-operator-79b997595-8dx6h\" (UID: \"e797ebfa-a82a-42f5-883f-68d70ae80e7f\") " pod="openshift-marketplace/marketplace-operator-79b997595-8dx6h" Jan 27 06:52:26 crc kubenswrapper[4796]: I0127 06:52:26.503357 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-8dx6h" Jan 27 06:52:26 crc kubenswrapper[4796]: I0127 06:52:26.717383 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-8dx6h"] Jan 27 06:52:26 crc kubenswrapper[4796]: I0127 06:52:26.968897 4796 generic.go:334] "Generic (PLEG): container finished" podID="47a26ac3-524b-47c5-abb8-d2c4837659e7" containerID="fbbdae8cd5e56f50a5c6abb80372e700e4368c3bf10f314c2dc207f5499c2527" exitCode=0 Jan 27 06:52:26 crc kubenswrapper[4796]: I0127 06:52:26.969053 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8lc8b" event={"ID":"47a26ac3-524b-47c5-abb8-d2c4837659e7","Type":"ContainerDied","Data":"fbbdae8cd5e56f50a5c6abb80372e700e4368c3bf10f314c2dc207f5499c2527"} Jan 27 06:52:26 crc kubenswrapper[4796]: I0127 06:52:26.970979 4796 generic.go:334] "Generic (PLEG): container finished" podID="25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae" containerID="145684e338d0a3acbab02ae51c8304b1314dc11007f526099b98c4089bde69f8" exitCode=0 Jan 27 06:52:26 crc kubenswrapper[4796]: I0127 06:52:26.971031 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8r59" event={"ID":"25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae","Type":"ContainerDied","Data":"145684e338d0a3acbab02ae51c8304b1314dc11007f526099b98c4089bde69f8"} Jan 27 06:52:26 crc kubenswrapper[4796]: I0127 06:52:26.972907 4796 generic.go:334] "Generic (PLEG): container finished" podID="ae643542-ca5e-4cee-aaba-818f3d424763" containerID="b69e0f63a0b8ef774051df55b3c95eb1838265c411b21561b246fea52b63892a" exitCode=0 Jan 27 06:52:26 crc kubenswrapper[4796]: I0127 06:52:26.972970 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mn94j" event={"ID":"ae643542-ca5e-4cee-aaba-818f3d424763","Type":"ContainerDied","Data":"b69e0f63a0b8ef774051df55b3c95eb1838265c411b21561b246fea52b63892a"} Jan 27 06:52:26 crc kubenswrapper[4796]: I0127 06:52:26.975346 4796 generic.go:334] "Generic (PLEG): container finished" podID="242ef06f-796a-4c77-810b-bde4a5fbc087" containerID="0eeed9873078fe1ad09ba89d37c1845d14cf24e8f115de55365945b20d7ce3a3" exitCode=0 Jan 27 06:52:26 crc kubenswrapper[4796]: I0127 06:52:26.975396 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cv5hm" event={"ID":"242ef06f-796a-4c77-810b-bde4a5fbc087","Type":"ContainerDied","Data":"0eeed9873078fe1ad09ba89d37c1845d14cf24e8f115de55365945b20d7ce3a3"} Jan 27 06:52:26 crc kubenswrapper[4796]: I0127 06:52:26.977128 4796 generic.go:334] "Generic (PLEG): container finished" podID="c009d452-642e-47de-994c-cc6e0af791f9" 
containerID="089147d22a80bc60a05af262dceaa58388ef7515590f16920426b7fd80b9a461" exitCode=0 Jan 27 06:52:26 crc kubenswrapper[4796]: I0127 06:52:26.977177 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qhgrn" event={"ID":"c009d452-642e-47de-994c-cc6e0af791f9","Type":"ContainerDied","Data":"089147d22a80bc60a05af262dceaa58388ef7515590f16920426b7fd80b9a461"} Jan 27 06:52:26 crc kubenswrapper[4796]: I0127 06:52:26.977200 4796 scope.go:117] "RemoveContainer" containerID="e9ee7ef0fe9e8891ad3a78a721d86719309131587f256048558d231cbe5943e1" Jan 27 06:52:26 crc kubenswrapper[4796]: I0127 06:52:26.979568 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8dx6h" event={"ID":"e797ebfa-a82a-42f5-883f-68d70ae80e7f","Type":"ContainerStarted","Data":"dbf3cfc007ecaf16d36dc37271800e62d30f8df17c823b63a3f4ed71bf6d358d"} Jan 27 06:52:26 crc kubenswrapper[4796]: I0127 06:52:26.979613 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-8dx6h" event={"ID":"e797ebfa-a82a-42f5-883f-68d70ae80e7f","Type":"ContainerStarted","Data":"154ac9cb1762b38ea5a438736720f5b8df6540ad822644ca3becfc7b11de71f5"} Jan 27 06:52:26 crc kubenswrapper[4796]: I0127 06:52:26.979841 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-8dx6h" Jan 27 06:52:26 crc kubenswrapper[4796]: I0127 06:52:26.981030 4796 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-8dx6h container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.67:8080/healthz\": dial tcp 10.217.0.67:8080: connect: connection refused" start-of-body= Jan 27 06:52:26 crc kubenswrapper[4796]: I0127 06:52:26.981075 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-8dx6h" podUID="e797ebfa-a82a-42f5-883f-68d70ae80e7f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.67:8080/healthz\": dial tcp 10.217.0.67:8080: connect: connection refused" Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.002495 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-8dx6h" podStartSLOduration=1.00246957 podStartE2EDuration="1.00246957s" podCreationTimestamp="2026-01-27 06:52:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:52:26.999360706 +0000 UTC m=+368.106328043" watchObservedRunningTime="2026-01-27 06:52:27.00246957 +0000 UTC m=+368.109436897" Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.043238 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j8r59" Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.126207 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mn94j" Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.135054 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cv5hm" Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.144236 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qhgrn" Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.162412 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8lc8b" Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.192879 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae-utilities\") pod \"25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae\" (UID: \"25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae\") " Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.192949 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79thm\" (UniqueName: \"kubernetes.io/projected/25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae-kube-api-access-79thm\") pod \"25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae\" (UID: \"25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae\") " Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.193013 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae-catalog-content\") pod \"25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae\" (UID: \"25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae\") " Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.193640 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae-utilities" (OuterVolumeSpecName: "utilities") pod "25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae" (UID: "25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.199918 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae-kube-api-access-79thm" (OuterVolumeSpecName: "kube-api-access-79thm") pod "25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae" (UID: "25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae"). InnerVolumeSpecName "kube-api-access-79thm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.270064 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae" (UID: "25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.294649 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/242ef06f-796a-4c77-810b-bde4a5fbc087-catalog-content\") pod \"242ef06f-796a-4c77-810b-bde4a5fbc087\" (UID: \"242ef06f-796a-4c77-810b-bde4a5fbc087\") " Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.294706 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae643542-ca5e-4cee-aaba-818f3d424763-utilities\") pod \"ae643542-ca5e-4cee-aaba-818f3d424763\" (UID: \"ae643542-ca5e-4cee-aaba-818f3d424763\") " Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.294776 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8dws\" (UniqueName: \"kubernetes.io/projected/c009d452-642e-47de-994c-cc6e0af791f9-kube-api-access-d8dws\") pod \"c009d452-642e-47de-994c-cc6e0af791f9\" (UID: \"c009d452-642e-47de-994c-cc6e0af791f9\") " Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.294799 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c009d452-642e-47de-994c-cc6e0af791f9-marketplace-trusted-ca\") pod \"c009d452-642e-47de-994c-cc6e0af791f9\" (UID: \"c009d452-642e-47de-994c-cc6e0af791f9\") " Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.294826 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47a26ac3-524b-47c5-abb8-d2c4837659e7-utilities\") pod \"47a26ac3-524b-47c5-abb8-d2c4837659e7\" (UID: \"47a26ac3-524b-47c5-abb8-d2c4837659e7\") " Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.294847 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/242ef06f-796a-4c77-810b-bde4a5fbc087-utilities\") pod \"242ef06f-796a-4c77-810b-bde4a5fbc087\" (UID: \"242ef06f-796a-4c77-810b-bde4a5fbc087\") " Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.294877 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blrrh\" (UniqueName: \"kubernetes.io/projected/ae643542-ca5e-4cee-aaba-818f3d424763-kube-api-access-blrrh\") pod \"ae643542-ca5e-4cee-aaba-818f3d424763\" (UID: \"ae643542-ca5e-4cee-aaba-818f3d424763\") " Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.294895 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g55k\" (UniqueName: \"kubernetes.io/projected/242ef06f-796a-4c77-810b-bde4a5fbc087-kube-api-access-6g55k\") pod \"242ef06f-796a-4c77-810b-bde4a5fbc087\" (UID: \"242ef06f-796a-4c77-810b-bde4a5fbc087\") " Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.294910 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47a26ac3-524b-47c5-abb8-d2c4837659e7-catalog-content\") pod \"47a26ac3-524b-47c5-abb8-d2c4837659e7\" (UID: \"47a26ac3-524b-47c5-abb8-d2c4837659e7\") " Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.294929 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/c009d452-642e-47de-994c-cc6e0af791f9-marketplace-operator-metrics\") pod \"c009d452-642e-47de-994c-cc6e0af791f9\" (UID: \"c009d452-642e-47de-994c-cc6e0af791f9\") " Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.294943 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae643542-ca5e-4cee-aaba-818f3d424763-catalog-content\") pod \"ae643542-ca5e-4cee-aaba-818f3d424763\" (UID: \"ae643542-ca5e-4cee-aaba-818f3d424763\") " Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.294967 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7trsn\" (UniqueName: \"kubernetes.io/projected/47a26ac3-524b-47c5-abb8-d2c4837659e7-kube-api-access-7trsn\") pod \"47a26ac3-524b-47c5-abb8-d2c4837659e7\" (UID: \"47a26ac3-524b-47c5-abb8-d2c4837659e7\") " Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.295142 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.295153 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79thm\" (UniqueName: \"kubernetes.io/projected/25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae-kube-api-access-79thm\") on node \"crc\" DevicePath \"\"" Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.295163 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.296051 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/242ef06f-796a-4c77-810b-bde4a5fbc087-utilities" (OuterVolumeSpecName: "utilities") pod "242ef06f-796a-4c77-810b-bde4a5fbc087" (UID: "242ef06f-796a-4c77-810b-bde4a5fbc087"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.296867 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae643542-ca5e-4cee-aaba-818f3d424763-utilities" (OuterVolumeSpecName: "utilities") pod "ae643542-ca5e-4cee-aaba-818f3d424763" (UID: "ae643542-ca5e-4cee-aaba-818f3d424763"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.299542 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47a26ac3-524b-47c5-abb8-d2c4837659e7-kube-api-access-7trsn" (OuterVolumeSpecName: "kube-api-access-7trsn") pod "47a26ac3-524b-47c5-abb8-d2c4837659e7" (UID: "47a26ac3-524b-47c5-abb8-d2c4837659e7"). InnerVolumeSpecName "kube-api-access-7trsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.299868 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c009d452-642e-47de-994c-cc6e0af791f9-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "c009d452-642e-47de-994c-cc6e0af791f9" (UID: "c009d452-642e-47de-994c-cc6e0af791f9"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.301023 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47a26ac3-524b-47c5-abb8-d2c4837659e7-utilities" (OuterVolumeSpecName: "utilities") pod "47a26ac3-524b-47c5-abb8-d2c4837659e7" (UID: "47a26ac3-524b-47c5-abb8-d2c4837659e7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.301811 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c009d452-642e-47de-994c-cc6e0af791f9-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "c009d452-642e-47de-994c-cc6e0af791f9" (UID: "c009d452-642e-47de-994c-cc6e0af791f9"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.302596 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae643542-ca5e-4cee-aaba-818f3d424763-kube-api-access-blrrh" (OuterVolumeSpecName: "kube-api-access-blrrh") pod "ae643542-ca5e-4cee-aaba-818f3d424763" (UID: "ae643542-ca5e-4cee-aaba-818f3d424763"). InnerVolumeSpecName "kube-api-access-blrrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.304406 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/242ef06f-796a-4c77-810b-bde4a5fbc087-kube-api-access-6g55k" (OuterVolumeSpecName: "kube-api-access-6g55k") pod "242ef06f-796a-4c77-810b-bde4a5fbc087" (UID: "242ef06f-796a-4c77-810b-bde4a5fbc087"). InnerVolumeSpecName "kube-api-access-6g55k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.316079 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c009d452-642e-47de-994c-cc6e0af791f9-kube-api-access-d8dws" (OuterVolumeSpecName: "kube-api-access-d8dws") pod "c009d452-642e-47de-994c-cc6e0af791f9" (UID: "c009d452-642e-47de-994c-cc6e0af791f9"). InnerVolumeSpecName "kube-api-access-d8dws". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.344613 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47a26ac3-524b-47c5-abb8-d2c4837659e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "47a26ac3-524b-47c5-abb8-d2c4837659e7" (UID: "47a26ac3-524b-47c5-abb8-d2c4837659e7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.352813 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/242ef06f-796a-4c77-810b-bde4a5fbc087-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "242ef06f-796a-4c77-810b-bde4a5fbc087" (UID: "242ef06f-796a-4c77-810b-bde4a5fbc087"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.396647 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7trsn\" (UniqueName: \"kubernetes.io/projected/47a26ac3-524b-47c5-abb8-d2c4837659e7-kube-api-access-7trsn\") on node \"crc\" DevicePath \"\"" Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.396694 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/242ef06f-796a-4c77-810b-bde4a5fbc087-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.396704 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae643542-ca5e-4cee-aaba-818f3d424763-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.396713 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8dws\" (UniqueName: \"kubernetes.io/projected/c009d452-642e-47de-994c-cc6e0af791f9-kube-api-access-d8dws\") on node \"crc\" DevicePath \"\"" Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.396725 4796 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c009d452-642e-47de-994c-cc6e0af791f9-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.396735 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47a26ac3-524b-47c5-abb8-d2c4837659e7-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.396743 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/242ef06f-796a-4c77-810b-bde4a5fbc087-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.396752 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blrrh\" (UniqueName: \"kubernetes.io/projected/ae643542-ca5e-4cee-aaba-818f3d424763-kube-api-access-blrrh\") on node \"crc\" DevicePath \"\"" Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.396760 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g55k\" (UniqueName: \"kubernetes.io/projected/242ef06f-796a-4c77-810b-bde4a5fbc087-kube-api-access-6g55k\") on node \"crc\" DevicePath \"\"" Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.396768 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47a26ac3-524b-47c5-abb8-d2c4837659e7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.396776 4796 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c009d452-642e-47de-994c-cc6e0af791f9-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.423001 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae643542-ca5e-4cee-aaba-818f3d424763-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ae643542-ca5e-4cee-aaba-818f3d424763" (UID: "ae643542-ca5e-4cee-aaba-818f3d424763"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.497464 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae643542-ca5e-4cee-aaba-818f3d424763-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.992240 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qhgrn" event={"ID":"c009d452-642e-47de-994c-cc6e0af791f9","Type":"ContainerDied","Data":"afc806a2c32b84b87e7c7ff21145c9d82d640a01d01d6af6a080990d60659c33"} Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.992655 4796 scope.go:117] "RemoveContainer" containerID="089147d22a80bc60a05af262dceaa58388ef7515590f16920426b7fd80b9a461" Jan 27 06:52:27 crc kubenswrapper[4796]: I0127 06:52:27.992406 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qhgrn" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.003098 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8r59" event={"ID":"25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae","Type":"ContainerDied","Data":"04d2bd96db3fbd4a0cd9a864af84bd6fb0e38e5b573683956d5e28608c5cce99"} Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.003299 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j8r59" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.010875 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8lc8b" event={"ID":"47a26ac3-524b-47c5-abb8-d2c4837659e7","Type":"ContainerDied","Data":"acc3ab06f32197b8362333c6a8ff04de312864dcb7411359653cceedb5acf84a"} Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.011050 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8lc8b" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.016072 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mn94j" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.016204 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mn94j" event={"ID":"ae643542-ca5e-4cee-aaba-818f3d424763","Type":"ContainerDied","Data":"d5f04050158d8535553945106832094abf89eeaabbba31534b93cc6e01669eeb"} Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.024149 4796 scope.go:117] "RemoveContainer" containerID="145684e338d0a3acbab02ae51c8304b1314dc11007f526099b98c4089bde69f8" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.041129 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cv5hm" event={"ID":"242ef06f-796a-4c77-810b-bde4a5fbc087","Type":"ContainerDied","Data":"c951f899254af16dcebbdb105e4675d5880438c3391b4ceaa1703b38b1ef9f05"} Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.043164 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cv5hm" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.046701 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-8dx6h" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.058192 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j8r59"] Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.062520 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-j8r59"] Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.078866 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8lc8b"] Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.086140 4796 scope.go:117] "RemoveContainer" containerID="fc92cc306f7f9c5e1c65787d5dfdbeec683cde2a5f06c24dfd3927719587931d" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.091781 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8lc8b"] Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.097424 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mn94j"] Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.100580 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mn94j"] Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.113101 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qhgrn"] Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.113159 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qhgrn"] Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.113174 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cv5hm"] Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.115825 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cv5hm"] Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.136696 4796 scope.go:117] "RemoveContainer" containerID="30788984015f3669b8d1b3697085641ab64878b7b4fc16bf33709dcad9f9af67" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.157242 4796 scope.go:117] "RemoveContainer" containerID="fbbdae8cd5e56f50a5c6abb80372e700e4368c3bf10f314c2dc207f5499c2527" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.174960 4796 scope.go:117] "RemoveContainer" containerID="b8b93df8d9345c3ebb46c71e301f48962cbfa22768612e1d4fe069614fb27259" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.192387 4796 scope.go:117] "RemoveContainer" containerID="3b1074f4e3ad0d39594354e6ae5b575c9eef39a76881fe1e5c2333f5668645d7" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.206409 4796 scope.go:117] "RemoveContainer" containerID="b69e0f63a0b8ef774051df55b3c95eb1838265c411b21561b246fea52b63892a" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.221608 4796 scope.go:117] "RemoveContainer" containerID="93e30ccba62222b7c0be36caeb46537aae4409941ddfafbb8c2842795a723677" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.261492 4796 scope.go:117] "RemoveContainer" containerID="26e7b8ba591ed48d7a1a28141f5bf8dfd8f8a420ac08645cf6b279479662c366" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.283150 4796 scope.go:117] 
"RemoveContainer" containerID="0eeed9873078fe1ad09ba89d37c1845d14cf24e8f115de55365945b20d7ce3a3" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.301576 4796 scope.go:117] "RemoveContainer" containerID="cf3074d64e7ca8c6d48cd194a94ed43a140975694438fefcb4a2458a4197c9f4" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.315624 4796 scope.go:117] "RemoveContainer" containerID="2c924eecca1b1fac5921a41c093b6aca85e75f193d3273a172358603fc6b3eb4" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.535511 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2grzv"] Jan 27 06:52:28 crc kubenswrapper[4796]: E0127 06:52:28.535772 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c009d452-642e-47de-994c-cc6e0af791f9" containerName="marketplace-operator" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.535788 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c009d452-642e-47de-994c-cc6e0af791f9" containerName="marketplace-operator" Jan 27 06:52:28 crc kubenswrapper[4796]: E0127 06:52:28.535801 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae" containerName="extract-content" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.535807 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae" containerName="extract-content" Jan 27 06:52:28 crc kubenswrapper[4796]: E0127 06:52:28.535814 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae643542-ca5e-4cee-aaba-818f3d424763" containerName="registry-server" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.535820 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae643542-ca5e-4cee-aaba-818f3d424763" containerName="registry-server" Jan 27 06:52:28 crc kubenswrapper[4796]: E0127 06:52:28.535826 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47a26ac3-524b-47c5-abb8-d2c4837659e7" containerName="extract-content" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.535832 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="47a26ac3-524b-47c5-abb8-d2c4837659e7" containerName="extract-content" Jan 27 06:52:28 crc kubenswrapper[4796]: E0127 06:52:28.535838 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47a26ac3-524b-47c5-abb8-d2c4837659e7" containerName="extract-utilities" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.535844 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="47a26ac3-524b-47c5-abb8-d2c4837659e7" containerName="extract-utilities" Jan 27 06:52:28 crc kubenswrapper[4796]: E0127 06:52:28.535852 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae" containerName="extract-utilities" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.535857 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae" containerName="extract-utilities" Jan 27 06:52:28 crc kubenswrapper[4796]: E0127 06:52:28.535865 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="242ef06f-796a-4c77-810b-bde4a5fbc087" containerName="extract-content" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.535872 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="242ef06f-796a-4c77-810b-bde4a5fbc087" containerName="extract-content" Jan 27 06:52:28 crc kubenswrapper[4796]: E0127 06:52:28.535880 4796 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="242ef06f-796a-4c77-810b-bde4a5fbc087" containerName="extract-utilities" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.535885 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="242ef06f-796a-4c77-810b-bde4a5fbc087" containerName="extract-utilities" Jan 27 06:52:28 crc kubenswrapper[4796]: E0127 06:52:28.535893 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae" containerName="registry-server" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.535899 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae" containerName="registry-server" Jan 27 06:52:28 crc kubenswrapper[4796]: E0127 06:52:28.535906 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47a26ac3-524b-47c5-abb8-d2c4837659e7" containerName="registry-server" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.535911 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="47a26ac3-524b-47c5-abb8-d2c4837659e7" containerName="registry-server" Jan 27 06:52:28 crc kubenswrapper[4796]: E0127 06:52:28.535920 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="242ef06f-796a-4c77-810b-bde4a5fbc087" containerName="registry-server" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.535926 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="242ef06f-796a-4c77-810b-bde4a5fbc087" containerName="registry-server" Jan 27 06:52:28 crc kubenswrapper[4796]: E0127 06:52:28.535936 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae643542-ca5e-4cee-aaba-818f3d424763" containerName="extract-utilities" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.535942 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae643542-ca5e-4cee-aaba-818f3d424763" containerName="extract-utilities" Jan 27 06:52:28 crc kubenswrapper[4796]: E0127 06:52:28.535950 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae643542-ca5e-4cee-aaba-818f3d424763" containerName="extract-content" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.535956 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae643542-ca5e-4cee-aaba-818f3d424763" containerName="extract-content" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.536049 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="c009d452-642e-47de-994c-cc6e0af791f9" containerName="marketplace-operator" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.536061 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae" containerName="registry-server" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.536067 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="47a26ac3-524b-47c5-abb8-d2c4837659e7" containerName="registry-server" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.536074 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae643542-ca5e-4cee-aaba-818f3d424763" containerName="registry-server" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.536081 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="242ef06f-796a-4c77-810b-bde4a5fbc087" containerName="registry-server" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.536089 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="c009d452-642e-47de-994c-cc6e0af791f9" containerName="marketplace-operator" Jan 27 06:52:28 crc kubenswrapper[4796]: E0127 06:52:28.536165 
4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c009d452-642e-47de-994c-cc6e0af791f9" containerName="marketplace-operator" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.536172 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c009d452-642e-47de-994c-cc6e0af791f9" containerName="marketplace-operator" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.536838 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2grzv" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.542509 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.549821 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2grzv"] Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.715403 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dbdcaa5-dd05-4b9c-95a9-5bb30f2d09f8-utilities\") pod \"redhat-marketplace-2grzv\" (UID: \"3dbdcaa5-dd05-4b9c-95a9-5bb30f2d09f8\") " pod="openshift-marketplace/redhat-marketplace-2grzv" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.715555 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dbdcaa5-dd05-4b9c-95a9-5bb30f2d09f8-catalog-content\") pod \"redhat-marketplace-2grzv\" (UID: \"3dbdcaa5-dd05-4b9c-95a9-5bb30f2d09f8\") " pod="openshift-marketplace/redhat-marketplace-2grzv" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.715810 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j7nb\" (UniqueName: \"kubernetes.io/projected/3dbdcaa5-dd05-4b9c-95a9-5bb30f2d09f8-kube-api-access-7j7nb\") pod \"redhat-marketplace-2grzv\" (UID: \"3dbdcaa5-dd05-4b9c-95a9-5bb30f2d09f8\") " pod="openshift-marketplace/redhat-marketplace-2grzv" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.733382 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j2snq"] Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.736604 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j2snq" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.740591 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.744233 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j2snq"] Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.753270 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="242ef06f-796a-4c77-810b-bde4a5fbc087" path="/var/lib/kubelet/pods/242ef06f-796a-4c77-810b-bde4a5fbc087/volumes" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.754045 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae" path="/var/lib/kubelet/pods/25bb764a-c021-4c8b-b0d4-da2f5ed8a4ae/volumes" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.754728 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47a26ac3-524b-47c5-abb8-d2c4837659e7" path="/var/lib/kubelet/pods/47a26ac3-524b-47c5-abb8-d2c4837659e7/volumes" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.755823 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae643542-ca5e-4cee-aaba-818f3d424763" path="/var/lib/kubelet/pods/ae643542-ca5e-4cee-aaba-818f3d424763/volumes" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.756488 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c009d452-642e-47de-994c-cc6e0af791f9" path="/var/lib/kubelet/pods/c009d452-642e-47de-994c-cc6e0af791f9/volumes" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.817383 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dbdcaa5-dd05-4b9c-95a9-5bb30f2d09f8-catalog-content\") pod \"redhat-marketplace-2grzv\" (UID: \"3dbdcaa5-dd05-4b9c-95a9-5bb30f2d09f8\") " pod="openshift-marketplace/redhat-marketplace-2grzv" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.817469 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j7nb\" (UniqueName: \"kubernetes.io/projected/3dbdcaa5-dd05-4b9c-95a9-5bb30f2d09f8-kube-api-access-7j7nb\") pod \"redhat-marketplace-2grzv\" (UID: \"3dbdcaa5-dd05-4b9c-95a9-5bb30f2d09f8\") " pod="openshift-marketplace/redhat-marketplace-2grzv" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.817560 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dbdcaa5-dd05-4b9c-95a9-5bb30f2d09f8-utilities\") pod \"redhat-marketplace-2grzv\" (UID: \"3dbdcaa5-dd05-4b9c-95a9-5bb30f2d09f8\") " pod="openshift-marketplace/redhat-marketplace-2grzv" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.818196 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dbdcaa5-dd05-4b9c-95a9-5bb30f2d09f8-utilities\") pod \"redhat-marketplace-2grzv\" (UID: \"3dbdcaa5-dd05-4b9c-95a9-5bb30f2d09f8\") " pod="openshift-marketplace/redhat-marketplace-2grzv" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.818200 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dbdcaa5-dd05-4b9c-95a9-5bb30f2d09f8-catalog-content\") pod \"redhat-marketplace-2grzv\" (UID: 
\"3dbdcaa5-dd05-4b9c-95a9-5bb30f2d09f8\") " pod="openshift-marketplace/redhat-marketplace-2grzv" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.838121 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j7nb\" (UniqueName: \"kubernetes.io/projected/3dbdcaa5-dd05-4b9c-95a9-5bb30f2d09f8-kube-api-access-7j7nb\") pod \"redhat-marketplace-2grzv\" (UID: \"3dbdcaa5-dd05-4b9c-95a9-5bb30f2d09f8\") " pod="openshift-marketplace/redhat-marketplace-2grzv" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.868322 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2grzv" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.919057 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e16325da-a1d2-4dd0-bd2c-f3b0c90db131-catalog-content\") pod \"redhat-operators-j2snq\" (UID: \"e16325da-a1d2-4dd0-bd2c-f3b0c90db131\") " pod="openshift-marketplace/redhat-operators-j2snq" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.919148 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e16325da-a1d2-4dd0-bd2c-f3b0c90db131-utilities\") pod \"redhat-operators-j2snq\" (UID: \"e16325da-a1d2-4dd0-bd2c-f3b0c90db131\") " pod="openshift-marketplace/redhat-operators-j2snq" Jan 27 06:52:28 crc kubenswrapper[4796]: I0127 06:52:28.919172 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84jm6\" (UniqueName: \"kubernetes.io/projected/e16325da-a1d2-4dd0-bd2c-f3b0c90db131-kube-api-access-84jm6\") pod \"redhat-operators-j2snq\" (UID: \"e16325da-a1d2-4dd0-bd2c-f3b0c90db131\") " pod="openshift-marketplace/redhat-operators-j2snq" Jan 27 06:52:29 crc kubenswrapper[4796]: I0127 06:52:29.020509 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e16325da-a1d2-4dd0-bd2c-f3b0c90db131-catalog-content\") pod \"redhat-operators-j2snq\" (UID: \"e16325da-a1d2-4dd0-bd2c-f3b0c90db131\") " pod="openshift-marketplace/redhat-operators-j2snq" Jan 27 06:52:29 crc kubenswrapper[4796]: I0127 06:52:29.020600 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e16325da-a1d2-4dd0-bd2c-f3b0c90db131-utilities\") pod \"redhat-operators-j2snq\" (UID: \"e16325da-a1d2-4dd0-bd2c-f3b0c90db131\") " pod="openshift-marketplace/redhat-operators-j2snq" Jan 27 06:52:29 crc kubenswrapper[4796]: I0127 06:52:29.020619 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84jm6\" (UniqueName: \"kubernetes.io/projected/e16325da-a1d2-4dd0-bd2c-f3b0c90db131-kube-api-access-84jm6\") pod \"redhat-operators-j2snq\" (UID: \"e16325da-a1d2-4dd0-bd2c-f3b0c90db131\") " pod="openshift-marketplace/redhat-operators-j2snq" Jan 27 06:52:29 crc kubenswrapper[4796]: I0127 06:52:29.021122 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e16325da-a1d2-4dd0-bd2c-f3b0c90db131-catalog-content\") pod \"redhat-operators-j2snq\" (UID: \"e16325da-a1d2-4dd0-bd2c-f3b0c90db131\") " pod="openshift-marketplace/redhat-operators-j2snq" Jan 27 06:52:29 crc kubenswrapper[4796]: I0127 06:52:29.021204 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e16325da-a1d2-4dd0-bd2c-f3b0c90db131-utilities\") pod \"redhat-operators-j2snq\" (UID: \"e16325da-a1d2-4dd0-bd2c-f3b0c90db131\") " pod="openshift-marketplace/redhat-operators-j2snq" Jan 27 06:52:29 crc kubenswrapper[4796]: I0127 06:52:29.041590 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84jm6\" (UniqueName: \"kubernetes.io/projected/e16325da-a1d2-4dd0-bd2c-f3b0c90db131-kube-api-access-84jm6\") pod \"redhat-operators-j2snq\" (UID: \"e16325da-a1d2-4dd0-bd2c-f3b0c90db131\") " pod="openshift-marketplace/redhat-operators-j2snq" Jan 27 06:52:29 crc kubenswrapper[4796]: I0127 06:52:29.063632 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j2snq" Jan 27 06:52:29 crc kubenswrapper[4796]: I0127 06:52:29.109184 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2grzv"] Jan 27 06:52:29 crc kubenswrapper[4796]: W0127 06:52:29.121864 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dbdcaa5_dd05_4b9c_95a9_5bb30f2d09f8.slice/crio-3c78ab4663122a35051f1f621cd639a1c56f2b5e3724fcf9c6026643ec1c8d19 WatchSource:0}: Error finding container 3c78ab4663122a35051f1f621cd639a1c56f2b5e3724fcf9c6026643ec1c8d19: Status 404 returned error can't find the container with id 3c78ab4663122a35051f1f621cd639a1c56f2b5e3724fcf9c6026643ec1c8d19 Jan 27 06:52:29 crc kubenswrapper[4796]: I0127 06:52:29.327714 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j2snq"] Jan 27 06:52:29 crc kubenswrapper[4796]: W0127 06:52:29.386855 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode16325da_a1d2_4dd0_bd2c_f3b0c90db131.slice/crio-71e52e0417527502164a2de83eb97e7349f2b169ac3cdb3e96525197902b24b5 WatchSource:0}: Error finding container 71e52e0417527502164a2de83eb97e7349f2b169ac3cdb3e96525197902b24b5: Status 404 returned error can't find the container with id 71e52e0417527502164a2de83eb97e7349f2b169ac3cdb3e96525197902b24b5 Jan 27 06:52:30 crc kubenswrapper[4796]: I0127 06:52:30.083182 4796 generic.go:334] "Generic (PLEG): container finished" podID="e16325da-a1d2-4dd0-bd2c-f3b0c90db131" containerID="d21b66ab21188386524727546b877bebc1115184648ce805658946daf68ce115" exitCode=0 Jan 27 06:52:30 crc kubenswrapper[4796]: I0127 06:52:30.083684 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2snq" event={"ID":"e16325da-a1d2-4dd0-bd2c-f3b0c90db131","Type":"ContainerDied","Data":"d21b66ab21188386524727546b877bebc1115184648ce805658946daf68ce115"} Jan 27 06:52:30 crc kubenswrapper[4796]: I0127 06:52:30.083714 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2snq" event={"ID":"e16325da-a1d2-4dd0-bd2c-f3b0c90db131","Type":"ContainerStarted","Data":"71e52e0417527502164a2de83eb97e7349f2b169ac3cdb3e96525197902b24b5"} Jan 27 06:52:30 crc kubenswrapper[4796]: I0127 06:52:30.084795 4796 generic.go:334] "Generic (PLEG): container finished" podID="3dbdcaa5-dd05-4b9c-95a9-5bb30f2d09f8" containerID="b329dc2fff00877b3f203a80daf6a850cd96eac7d48a0f86d3762fa2a37bfa6f" exitCode=0 Jan 27 06:52:30 crc kubenswrapper[4796]: I0127 06:52:30.084815 4796 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2grzv" event={"ID":"3dbdcaa5-dd05-4b9c-95a9-5bb30f2d09f8","Type":"ContainerDied","Data":"b329dc2fff00877b3f203a80daf6a850cd96eac7d48a0f86d3762fa2a37bfa6f"} Jan 27 06:52:30 crc kubenswrapper[4796]: I0127 06:52:30.084828 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2grzv" event={"ID":"3dbdcaa5-dd05-4b9c-95a9-5bb30f2d09f8","Type":"ContainerStarted","Data":"3c78ab4663122a35051f1f621cd639a1c56f2b5e3724fcf9c6026643ec1c8d19"} Jan 27 06:52:30 crc kubenswrapper[4796]: I0127 06:52:30.930755 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z8mzp"] Jan 27 06:52:30 crc kubenswrapper[4796]: I0127 06:52:30.932086 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z8mzp" Jan 27 06:52:30 crc kubenswrapper[4796]: I0127 06:52:30.934159 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 27 06:52:30 crc kubenswrapper[4796]: I0127 06:52:30.938701 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z8mzp"] Jan 27 06:52:31 crc kubenswrapper[4796]: I0127 06:52:31.052637 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74b88900-aeaa-4111-a665-af4559febdd8-catalog-content\") pod \"certified-operators-z8mzp\" (UID: \"74b88900-aeaa-4111-a665-af4559febdd8\") " pod="openshift-marketplace/certified-operators-z8mzp" Jan 27 06:52:31 crc kubenswrapper[4796]: I0127 06:52:31.052694 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74b88900-aeaa-4111-a665-af4559febdd8-utilities\") pod \"certified-operators-z8mzp\" (UID: \"74b88900-aeaa-4111-a665-af4559febdd8\") " pod="openshift-marketplace/certified-operators-z8mzp" Jan 27 06:52:31 crc kubenswrapper[4796]: I0127 06:52:31.052740 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml45x\" (UniqueName: \"kubernetes.io/projected/74b88900-aeaa-4111-a665-af4559febdd8-kube-api-access-ml45x\") pod \"certified-operators-z8mzp\" (UID: \"74b88900-aeaa-4111-a665-af4559febdd8\") " pod="openshift-marketplace/certified-operators-z8mzp" Jan 27 06:52:31 crc kubenswrapper[4796]: I0127 06:52:31.091420 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2snq" event={"ID":"e16325da-a1d2-4dd0-bd2c-f3b0c90db131","Type":"ContainerStarted","Data":"948ef9b42880545bfa57b5c597b2fee50a0fc3c93447232d0758ab86ce82857d"} Jan 27 06:52:31 crc kubenswrapper[4796]: I0127 06:52:31.093630 4796 generic.go:334] "Generic (PLEG): container finished" podID="3dbdcaa5-dd05-4b9c-95a9-5bb30f2d09f8" containerID="fa74a8c02495f11c78d625998903705a27703223ee6d5fd03e2f29dfa3bbb361" exitCode=0 Jan 27 06:52:31 crc kubenswrapper[4796]: I0127 06:52:31.093657 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2grzv" event={"ID":"3dbdcaa5-dd05-4b9c-95a9-5bb30f2d09f8","Type":"ContainerDied","Data":"fa74a8c02495f11c78d625998903705a27703223ee6d5fd03e2f29dfa3bbb361"} Jan 27 06:52:31 crc kubenswrapper[4796]: I0127 06:52:31.132591 4796 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-9qlsp"] Jan 27 06:52:31 crc kubenswrapper[4796]: I0127 06:52:31.133656 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9qlsp" Jan 27 06:52:31 crc kubenswrapper[4796]: I0127 06:52:31.135582 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 27 06:52:31 crc kubenswrapper[4796]: I0127 06:52:31.141690 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9qlsp"] Jan 27 06:52:31 crc kubenswrapper[4796]: I0127 06:52:31.155678 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml45x\" (UniqueName: \"kubernetes.io/projected/74b88900-aeaa-4111-a665-af4559febdd8-kube-api-access-ml45x\") pod \"certified-operators-z8mzp\" (UID: \"74b88900-aeaa-4111-a665-af4559febdd8\") " pod="openshift-marketplace/certified-operators-z8mzp" Jan 27 06:52:31 crc kubenswrapper[4796]: I0127 06:52:31.155775 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74b88900-aeaa-4111-a665-af4559febdd8-catalog-content\") pod \"certified-operators-z8mzp\" (UID: \"74b88900-aeaa-4111-a665-af4559febdd8\") " pod="openshift-marketplace/certified-operators-z8mzp" Jan 27 06:52:31 crc kubenswrapper[4796]: I0127 06:52:31.155809 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74b88900-aeaa-4111-a665-af4559febdd8-utilities\") pod \"certified-operators-z8mzp\" (UID: \"74b88900-aeaa-4111-a665-af4559febdd8\") " pod="openshift-marketplace/certified-operators-z8mzp" Jan 27 06:52:31 crc kubenswrapper[4796]: I0127 06:52:31.156389 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74b88900-aeaa-4111-a665-af4559febdd8-utilities\") pod \"certified-operators-z8mzp\" (UID: \"74b88900-aeaa-4111-a665-af4559febdd8\") " pod="openshift-marketplace/certified-operators-z8mzp" Jan 27 06:52:31 crc kubenswrapper[4796]: I0127 06:52:31.156489 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74b88900-aeaa-4111-a665-af4559febdd8-catalog-content\") pod \"certified-operators-z8mzp\" (UID: \"74b88900-aeaa-4111-a665-af4559febdd8\") " pod="openshift-marketplace/certified-operators-z8mzp" Jan 27 06:52:31 crc kubenswrapper[4796]: I0127 06:52:31.172729 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml45x\" (UniqueName: \"kubernetes.io/projected/74b88900-aeaa-4111-a665-af4559febdd8-kube-api-access-ml45x\") pod \"certified-operators-z8mzp\" (UID: \"74b88900-aeaa-4111-a665-af4559febdd8\") " pod="openshift-marketplace/certified-operators-z8mzp" Jan 27 06:52:31 crc kubenswrapper[4796]: I0127 06:52:31.245014 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z8mzp" Jan 27 06:52:31 crc kubenswrapper[4796]: I0127 06:52:31.257263 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9e051cd-3b4f-4ce4-85fd-9fa61162e0bb-catalog-content\") pod \"community-operators-9qlsp\" (UID: \"e9e051cd-3b4f-4ce4-85fd-9fa61162e0bb\") " pod="openshift-marketplace/community-operators-9qlsp" Jan 27 06:52:31 crc kubenswrapper[4796]: I0127 06:52:31.257310 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9e051cd-3b4f-4ce4-85fd-9fa61162e0bb-utilities\") pod \"community-operators-9qlsp\" (UID: \"e9e051cd-3b4f-4ce4-85fd-9fa61162e0bb\") " pod="openshift-marketplace/community-operators-9qlsp" Jan 27 06:52:31 crc kubenswrapper[4796]: I0127 06:52:31.257329 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dj6v\" (UniqueName: \"kubernetes.io/projected/e9e051cd-3b4f-4ce4-85fd-9fa61162e0bb-kube-api-access-5dj6v\") pod \"community-operators-9qlsp\" (UID: \"e9e051cd-3b4f-4ce4-85fd-9fa61162e0bb\") " pod="openshift-marketplace/community-operators-9qlsp" Jan 27 06:52:31 crc kubenswrapper[4796]: I0127 06:52:31.359182 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9e051cd-3b4f-4ce4-85fd-9fa61162e0bb-utilities\") pod \"community-operators-9qlsp\" (UID: \"e9e051cd-3b4f-4ce4-85fd-9fa61162e0bb\") " pod="openshift-marketplace/community-operators-9qlsp" Jan 27 06:52:31 crc kubenswrapper[4796]: I0127 06:52:31.359484 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dj6v\" (UniqueName: \"kubernetes.io/projected/e9e051cd-3b4f-4ce4-85fd-9fa61162e0bb-kube-api-access-5dj6v\") pod \"community-operators-9qlsp\" (UID: \"e9e051cd-3b4f-4ce4-85fd-9fa61162e0bb\") " pod="openshift-marketplace/community-operators-9qlsp" Jan 27 06:52:31 crc kubenswrapper[4796]: I0127 06:52:31.359631 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9e051cd-3b4f-4ce4-85fd-9fa61162e0bb-catalog-content\") pod \"community-operators-9qlsp\" (UID: \"e9e051cd-3b4f-4ce4-85fd-9fa61162e0bb\") " pod="openshift-marketplace/community-operators-9qlsp" Jan 27 06:52:31 crc kubenswrapper[4796]: I0127 06:52:31.360110 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9e051cd-3b4f-4ce4-85fd-9fa61162e0bb-catalog-content\") pod \"community-operators-9qlsp\" (UID: \"e9e051cd-3b4f-4ce4-85fd-9fa61162e0bb\") " pod="openshift-marketplace/community-operators-9qlsp" Jan 27 06:52:31 crc kubenswrapper[4796]: I0127 06:52:31.360391 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9e051cd-3b4f-4ce4-85fd-9fa61162e0bb-utilities\") pod \"community-operators-9qlsp\" (UID: \"e9e051cd-3b4f-4ce4-85fd-9fa61162e0bb\") " pod="openshift-marketplace/community-operators-9qlsp" Jan 27 06:52:31 crc kubenswrapper[4796]: I0127 06:52:31.384710 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dj6v\" (UniqueName: \"kubernetes.io/projected/e9e051cd-3b4f-4ce4-85fd-9fa61162e0bb-kube-api-access-5dj6v\") pod 
\"community-operators-9qlsp\" (UID: \"e9e051cd-3b4f-4ce4-85fd-9fa61162e0bb\") " pod="openshift-marketplace/community-operators-9qlsp" Jan 27 06:52:31 crc kubenswrapper[4796]: I0127 06:52:31.456505 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9qlsp" Jan 27 06:52:31 crc kubenswrapper[4796]: I0127 06:52:31.664143 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z8mzp"] Jan 27 06:52:31 crc kubenswrapper[4796]: I0127 06:52:31.806512 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cdbf6b485-tnfhz"] Jan 27 06:52:31 crc kubenswrapper[4796]: I0127 06:52:31.806748 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-tnfhz" podUID="57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507" containerName="route-controller-manager" containerID="cri-o://ce66caf1c495b6d1881e34c32c2d713372d40d2a928bc58c2f8be1a1f5e4d6ad" gracePeriod=30 Jan 27 06:52:31 crc kubenswrapper[4796]: I0127 06:52:31.856886 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9qlsp"] Jan 27 06:52:31 crc kubenswrapper[4796]: W0127 06:52:31.880194 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9e051cd_3b4f_4ce4_85fd_9fa61162e0bb.slice/crio-fa90c3a5340033572218522a2e3de7731333e4c05a48c3c7d4d4aa86fa4917ce WatchSource:0}: Error finding container fa90c3a5340033572218522a2e3de7731333e4c05a48c3c7d4d4aa86fa4917ce: Status 404 returned error can't find the container with id fa90c3a5340033572218522a2e3de7731333e4c05a48c3c7d4d4aa86fa4917ce Jan 27 06:52:32 crc kubenswrapper[4796]: I0127 06:52:32.101216 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2grzv" event={"ID":"3dbdcaa5-dd05-4b9c-95a9-5bb30f2d09f8","Type":"ContainerStarted","Data":"c82aaac986a2b60a8948b5e59aabefb9d7ee24d7be0f5427d4f12449ad299cd1"} Jan 27 06:52:32 crc kubenswrapper[4796]: I0127 06:52:32.106829 4796 generic.go:334] "Generic (PLEG): container finished" podID="e9e051cd-3b4f-4ce4-85fd-9fa61162e0bb" containerID="2a01405ca8bdf9e7192d77d48a2805b5912f554c90f472604c08d0224499b4d9" exitCode=0 Jan 27 06:52:32 crc kubenswrapper[4796]: I0127 06:52:32.107138 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9qlsp" event={"ID":"e9e051cd-3b4f-4ce4-85fd-9fa61162e0bb","Type":"ContainerDied","Data":"2a01405ca8bdf9e7192d77d48a2805b5912f554c90f472604c08d0224499b4d9"} Jan 27 06:52:32 crc kubenswrapper[4796]: I0127 06:52:32.107221 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9qlsp" event={"ID":"e9e051cd-3b4f-4ce4-85fd-9fa61162e0bb","Type":"ContainerStarted","Data":"fa90c3a5340033572218522a2e3de7731333e4c05a48c3c7d4d4aa86fa4917ce"} Jan 27 06:52:32 crc kubenswrapper[4796]: I0127 06:52:32.109469 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8mzp" event={"ID":"74b88900-aeaa-4111-a665-af4559febdd8","Type":"ContainerDied","Data":"659bc79fe189fc0a3fe05ab7ba996e6bd30e9094df5f460478c977f4914e9741"} Jan 27 06:52:32 crc kubenswrapper[4796]: I0127 06:52:32.109326 4796 generic.go:334] "Generic (PLEG): container finished" podID="74b88900-aeaa-4111-a665-af4559febdd8" 
containerID="659bc79fe189fc0a3fe05ab7ba996e6bd30e9094df5f460478c977f4914e9741" exitCode=0 Jan 27 06:52:32 crc kubenswrapper[4796]: I0127 06:52:32.110351 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8mzp" event={"ID":"74b88900-aeaa-4111-a665-af4559febdd8","Type":"ContainerStarted","Data":"68c881b892d164664cca29eaa6e5fb37e68f3ffcb9018726d5bfea67637a2125"} Jan 27 06:52:32 crc kubenswrapper[4796]: I0127 06:52:32.112212 4796 generic.go:334] "Generic (PLEG): container finished" podID="57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507" containerID="ce66caf1c495b6d1881e34c32c2d713372d40d2a928bc58c2f8be1a1f5e4d6ad" exitCode=0 Jan 27 06:52:32 crc kubenswrapper[4796]: I0127 06:52:32.112265 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-tnfhz" event={"ID":"57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507","Type":"ContainerDied","Data":"ce66caf1c495b6d1881e34c32c2d713372d40d2a928bc58c2f8be1a1f5e4d6ad"} Jan 27 06:52:32 crc kubenswrapper[4796]: I0127 06:52:32.115200 4796 generic.go:334] "Generic (PLEG): container finished" podID="e16325da-a1d2-4dd0-bd2c-f3b0c90db131" containerID="948ef9b42880545bfa57b5c597b2fee50a0fc3c93447232d0758ab86ce82857d" exitCode=0 Jan 27 06:52:32 crc kubenswrapper[4796]: I0127 06:52:32.115251 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2snq" event={"ID":"e16325da-a1d2-4dd0-bd2c-f3b0c90db131","Type":"ContainerDied","Data":"948ef9b42880545bfa57b5c597b2fee50a0fc3c93447232d0758ab86ce82857d"} Jan 27 06:52:32 crc kubenswrapper[4796]: I0127 06:52:32.130098 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2grzv" podStartSLOduration=2.711917092 podStartE2EDuration="4.130075532s" podCreationTimestamp="2026-01-27 06:52:28 +0000 UTC" firstStartedPulling="2026-01-27 06:52:30.086359371 +0000 UTC m=+371.193326698" lastFinishedPulling="2026-01-27 06:52:31.504517801 +0000 UTC m=+372.611485138" observedRunningTime="2026-01-27 06:52:32.120743381 +0000 UTC m=+373.227710708" watchObservedRunningTime="2026-01-27 06:52:32.130075532 +0000 UTC m=+373.237042859" Jan 27 06:52:32 crc kubenswrapper[4796]: I0127 06:52:32.161469 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-tnfhz" Jan 27 06:52:32 crc kubenswrapper[4796]: I0127 06:52:32.270931 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cj87\" (UniqueName: \"kubernetes.io/projected/57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507-kube-api-access-5cj87\") pod \"57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507\" (UID: \"57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507\") " Jan 27 06:52:32 crc kubenswrapper[4796]: I0127 06:52:32.271071 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507-config\") pod \"57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507\" (UID: \"57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507\") " Jan 27 06:52:32 crc kubenswrapper[4796]: I0127 06:52:32.271112 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507-client-ca\") pod \"57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507\" (UID: \"57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507\") " Jan 27 06:52:32 crc kubenswrapper[4796]: I0127 06:52:32.271166 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507-serving-cert\") pod \"57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507\" (UID: \"57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507\") " Jan 27 06:52:32 crc kubenswrapper[4796]: I0127 06:52:32.271795 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507-client-ca" (OuterVolumeSpecName: "client-ca") pod "57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507" (UID: "57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:52:32 crc kubenswrapper[4796]: I0127 06:52:32.271847 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507-config" (OuterVolumeSpecName: "config") pod "57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507" (UID: "57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:52:32 crc kubenswrapper[4796]: I0127 06:52:32.272246 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:52:32 crc kubenswrapper[4796]: I0127 06:52:32.272271 4796 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507-client-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:52:32 crc kubenswrapper[4796]: I0127 06:52:32.276072 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507-kube-api-access-5cj87" (OuterVolumeSpecName: "kube-api-access-5cj87") pod "57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507" (UID: "57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507"). InnerVolumeSpecName "kube-api-access-5cj87". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:52:32 crc kubenswrapper[4796]: I0127 06:52:32.276138 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507" (UID: "57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:52:32 crc kubenswrapper[4796]: I0127 06:52:32.373582 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:52:32 crc kubenswrapper[4796]: I0127 06:52:32.373624 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cj87\" (UniqueName: \"kubernetes.io/projected/57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507-kube-api-access-5cj87\") on node \"crc\" DevicePath \"\"" Jan 27 06:52:33 crc kubenswrapper[4796]: I0127 06:52:33.108011 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-897ffc96d-vgknr"] Jan 27 06:52:33 crc kubenswrapper[4796]: E0127 06:52:33.108651 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507" containerName="route-controller-manager" Jan 27 06:52:33 crc kubenswrapper[4796]: I0127 06:52:33.108667 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507" containerName="route-controller-manager" Jan 27 06:52:33 crc kubenswrapper[4796]: I0127 06:52:33.108776 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507" containerName="route-controller-manager" Jan 27 06:52:33 crc kubenswrapper[4796]: I0127 06:52:33.109175 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-vgknr" Jan 27 06:52:33 crc kubenswrapper[4796]: I0127 06:52:33.118051 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-897ffc96d-vgknr"] Jan 27 06:52:33 crc kubenswrapper[4796]: I0127 06:52:33.122757 4796 generic.go:334] "Generic (PLEG): container finished" podID="74b88900-aeaa-4111-a665-af4559febdd8" containerID="100ea1d4cecf5ccb2b51c1b90a71c6b5887908500121ccfde33f7fce669eddf3" exitCode=0 Jan 27 06:52:33 crc kubenswrapper[4796]: I0127 06:52:33.122841 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8mzp" event={"ID":"74b88900-aeaa-4111-a665-af4559febdd8","Type":"ContainerDied","Data":"100ea1d4cecf5ccb2b51c1b90a71c6b5887908500121ccfde33f7fce669eddf3"} Jan 27 06:52:33 crc kubenswrapper[4796]: I0127 06:52:33.124290 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-tnfhz" Jan 27 06:52:33 crc kubenswrapper[4796]: I0127 06:52:33.124201 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cdbf6b485-tnfhz" event={"ID":"57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507","Type":"ContainerDied","Data":"f7ef5c97c532aa572dda9c2805843ec2b35b6dcdaa5d006a75d7384473df792d"} Jan 27 06:52:33 crc kubenswrapper[4796]: I0127 06:52:33.124465 4796 scope.go:117] "RemoveContainer" containerID="ce66caf1c495b6d1881e34c32c2d713372d40d2a928bc58c2f8be1a1f5e4d6ad" Jan 27 06:52:33 crc kubenswrapper[4796]: I0127 06:52:33.127476 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j2snq" event={"ID":"e16325da-a1d2-4dd0-bd2c-f3b0c90db131","Type":"ContainerStarted","Data":"af6dccac0208e37684f62ec5444fa7fb897851e8622a67e2ef8c240304b768bb"} Jan 27 06:52:33 crc kubenswrapper[4796]: I0127 06:52:33.129924 4796 generic.go:334] "Generic (PLEG): container finished" podID="e9e051cd-3b4f-4ce4-85fd-9fa61162e0bb" containerID="84ab1ccc38e57267272305e5a96dddf7299c427804ded73409943d4f463b6f98" exitCode=0 Jan 27 06:52:33 crc kubenswrapper[4796]: I0127 06:52:33.130742 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9qlsp" event={"ID":"e9e051cd-3b4f-4ce4-85fd-9fa61162e0bb","Type":"ContainerDied","Data":"84ab1ccc38e57267272305e5a96dddf7299c427804ded73409943d4f463b6f98"} Jan 27 06:52:33 crc kubenswrapper[4796]: I0127 06:52:33.178174 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j2snq" podStartSLOduration=2.655033007 podStartE2EDuration="5.178150369s" podCreationTimestamp="2026-01-27 06:52:28 +0000 UTC" firstStartedPulling="2026-01-27 06:52:30.08632515 +0000 UTC m=+371.193292477" lastFinishedPulling="2026-01-27 06:52:32.609442522 +0000 UTC m=+373.716409839" observedRunningTime="2026-01-27 06:52:33.17614329 +0000 UTC m=+374.283110617" watchObservedRunningTime="2026-01-27 06:52:33.178150369 +0000 UTC m=+374.285117706" Jan 27 06:52:33 crc kubenswrapper[4796]: I0127 06:52:33.236131 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cdbf6b485-tnfhz"] Jan 27 06:52:33 crc kubenswrapper[4796]: I0127 06:52:33.239112 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cdbf6b485-tnfhz"] Jan 27 06:52:33 crc kubenswrapper[4796]: I0127 06:52:33.287032 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ea0d0b6-713a-4d73-bbc4-65a06137806b-client-ca\") pod \"route-controller-manager-897ffc96d-vgknr\" (UID: \"6ea0d0b6-713a-4d73-bbc4-65a06137806b\") " pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-vgknr" Jan 27 06:52:33 crc kubenswrapper[4796]: I0127 06:52:33.287073 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5jtg\" (UniqueName: \"kubernetes.io/projected/6ea0d0b6-713a-4d73-bbc4-65a06137806b-kube-api-access-w5jtg\") pod \"route-controller-manager-897ffc96d-vgknr\" (UID: \"6ea0d0b6-713a-4d73-bbc4-65a06137806b\") " pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-vgknr" Jan 27 06:52:33 crc kubenswrapper[4796]: I0127 06:52:33.287208 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ea0d0b6-713a-4d73-bbc4-65a06137806b-config\") pod \"route-controller-manager-897ffc96d-vgknr\" (UID: \"6ea0d0b6-713a-4d73-bbc4-65a06137806b\") " pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-vgknr" Jan 27 06:52:33 crc kubenswrapper[4796]: I0127 06:52:33.287254 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ea0d0b6-713a-4d73-bbc4-65a06137806b-serving-cert\") pod \"route-controller-manager-897ffc96d-vgknr\" (UID: \"6ea0d0b6-713a-4d73-bbc4-65a06137806b\") " pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-vgknr" Jan 27 06:52:33 crc kubenswrapper[4796]: I0127 06:52:33.388184 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ea0d0b6-713a-4d73-bbc4-65a06137806b-config\") pod \"route-controller-manager-897ffc96d-vgknr\" (UID: \"6ea0d0b6-713a-4d73-bbc4-65a06137806b\") " pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-vgknr" Jan 27 06:52:33 crc kubenswrapper[4796]: I0127 06:52:33.388251 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ea0d0b6-713a-4d73-bbc4-65a06137806b-serving-cert\") pod \"route-controller-manager-897ffc96d-vgknr\" (UID: \"6ea0d0b6-713a-4d73-bbc4-65a06137806b\") " pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-vgknr" Jan 27 06:52:33 crc kubenswrapper[4796]: I0127 06:52:33.388291 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ea0d0b6-713a-4d73-bbc4-65a06137806b-client-ca\") pod \"route-controller-manager-897ffc96d-vgknr\" (UID: \"6ea0d0b6-713a-4d73-bbc4-65a06137806b\") " pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-vgknr" Jan 27 06:52:33 crc kubenswrapper[4796]: I0127 06:52:33.388312 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5jtg\" (UniqueName: \"kubernetes.io/projected/6ea0d0b6-713a-4d73-bbc4-65a06137806b-kube-api-access-w5jtg\") pod \"route-controller-manager-897ffc96d-vgknr\" (UID: \"6ea0d0b6-713a-4d73-bbc4-65a06137806b\") " pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-vgknr" Jan 27 06:52:33 crc kubenswrapper[4796]: I0127 06:52:33.389271 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ea0d0b6-713a-4d73-bbc4-65a06137806b-client-ca\") pod \"route-controller-manager-897ffc96d-vgknr\" (UID: \"6ea0d0b6-713a-4d73-bbc4-65a06137806b\") " pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-vgknr" Jan 27 06:52:33 crc kubenswrapper[4796]: I0127 06:52:33.389338 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ea0d0b6-713a-4d73-bbc4-65a06137806b-config\") pod \"route-controller-manager-897ffc96d-vgknr\" (UID: \"6ea0d0b6-713a-4d73-bbc4-65a06137806b\") " pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-vgknr" Jan 27 06:52:33 crc kubenswrapper[4796]: I0127 06:52:33.393880 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/6ea0d0b6-713a-4d73-bbc4-65a06137806b-serving-cert\") pod \"route-controller-manager-897ffc96d-vgknr\" (UID: \"6ea0d0b6-713a-4d73-bbc4-65a06137806b\") " pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-vgknr" Jan 27 06:52:33 crc kubenswrapper[4796]: I0127 06:52:33.409328 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5jtg\" (UniqueName: \"kubernetes.io/projected/6ea0d0b6-713a-4d73-bbc4-65a06137806b-kube-api-access-w5jtg\") pod \"route-controller-manager-897ffc96d-vgknr\" (UID: \"6ea0d0b6-713a-4d73-bbc4-65a06137806b\") " pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-vgknr" Jan 27 06:52:33 crc kubenswrapper[4796]: I0127 06:52:33.513058 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-vgknr" Jan 27 06:52:33 crc kubenswrapper[4796]: I0127 06:52:33.744362 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-897ffc96d-vgknr"] Jan 27 06:52:33 crc kubenswrapper[4796]: W0127 06:52:33.761201 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ea0d0b6_713a_4d73_bbc4_65a06137806b.slice/crio-8e7817e63196bc6a34d20482442c7fed46c83e1992ef5bc2c3b42a630f3a1291 WatchSource:0}: Error finding container 8e7817e63196bc6a34d20482442c7fed46c83e1992ef5bc2c3b42a630f3a1291: Status 404 returned error can't find the container with id 8e7817e63196bc6a34d20482442c7fed46c83e1992ef5bc2c3b42a630f3a1291 Jan 27 06:52:33 crc kubenswrapper[4796]: I0127 06:52:33.791008 4796 patch_prober.go:28] interesting pod/machine-config-daemon-qfqgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 06:52:33 crc kubenswrapper[4796]: I0127 06:52:33.791076 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" podUID="84d7512b-555d-440a-b817-deb8ba12f61d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 06:52:34 crc kubenswrapper[4796]: I0127 06:52:34.136580 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9qlsp" event={"ID":"e9e051cd-3b4f-4ce4-85fd-9fa61162e0bb","Type":"ContainerStarted","Data":"b11d1b99a405f7732d2a46905f538084627c741b1e469bbd9129416f6c4b520c"} Jan 27 06:52:34 crc kubenswrapper[4796]: I0127 06:52:34.139046 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-vgknr" event={"ID":"6ea0d0b6-713a-4d73-bbc4-65a06137806b","Type":"ContainerStarted","Data":"5157c4cb6609ca1ff53c02748860b43f4e97f3cb431dcce3c6cc3f7e0a6afd55"} Jan 27 06:52:34 crc kubenswrapper[4796]: I0127 06:52:34.139095 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-vgknr" event={"ID":"6ea0d0b6-713a-4d73-bbc4-65a06137806b","Type":"ContainerStarted","Data":"8e7817e63196bc6a34d20482442c7fed46c83e1992ef5bc2c3b42a630f3a1291"} Jan 27 06:52:34 crc kubenswrapper[4796]: I0127 06:52:34.139292 4796 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-vgknr" Jan 27 06:52:34 crc kubenswrapper[4796]: I0127 06:52:34.141761 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8mzp" event={"ID":"74b88900-aeaa-4111-a665-af4559febdd8","Type":"ContainerStarted","Data":"be2e8cf3bd890a5674fac0f67ca99e7b68b40955b3215a7b7f954e16eb9fa108"} Jan 27 06:52:34 crc kubenswrapper[4796]: I0127 06:52:34.155244 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9qlsp" podStartSLOduration=1.753066263 podStartE2EDuration="3.155225833s" podCreationTimestamp="2026-01-27 06:52:31 +0000 UTC" firstStartedPulling="2026-01-27 06:52:32.108364546 +0000 UTC m=+373.215331873" lastFinishedPulling="2026-01-27 06:52:33.510524116 +0000 UTC m=+374.617491443" observedRunningTime="2026-01-27 06:52:34.152050587 +0000 UTC m=+375.259017914" watchObservedRunningTime="2026-01-27 06:52:34.155225833 +0000 UTC m=+375.262193160" Jan 27 06:52:34 crc kubenswrapper[4796]: I0127 06:52:34.178258 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z8mzp" podStartSLOduration=2.686309833 podStartE2EDuration="4.178232941s" podCreationTimestamp="2026-01-27 06:52:30 +0000 UTC" firstStartedPulling="2026-01-27 06:52:32.111002119 +0000 UTC m=+373.217969446" lastFinishedPulling="2026-01-27 06:52:33.602925217 +0000 UTC m=+374.709892554" observedRunningTime="2026-01-27 06:52:34.175503146 +0000 UTC m=+375.282470473" watchObservedRunningTime="2026-01-27 06:52:34.178232941 +0000 UTC m=+375.285200278" Jan 27 06:52:34 crc kubenswrapper[4796]: I0127 06:52:34.200574 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-vgknr" podStartSLOduration=3.200556423 podStartE2EDuration="3.200556423s" podCreationTimestamp="2026-01-27 06:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:52:34.196949437 +0000 UTC m=+375.303916764" watchObservedRunningTime="2026-01-27 06:52:34.200556423 +0000 UTC m=+375.307523760" Jan 27 06:52:34 crc kubenswrapper[4796]: I0127 06:52:34.393978 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-897ffc96d-vgknr" Jan 27 06:52:34 crc kubenswrapper[4796]: I0127 06:52:34.774982 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507" path="/var/lib/kubelet/pods/57c6fb6d-3f1c-4ea5-aa1b-8fa8f3674507/volumes" Jan 27 06:52:38 crc kubenswrapper[4796]: I0127 06:52:38.868559 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2grzv" Jan 27 06:52:38 crc kubenswrapper[4796]: I0127 06:52:38.869095 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2grzv" Jan 27 06:52:38 crc kubenswrapper[4796]: I0127 06:52:38.913338 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2grzv" Jan 27 06:52:39 crc kubenswrapper[4796]: I0127 06:52:39.064829 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j2snq" Jan 27 06:52:39 crc 
kubenswrapper[4796]: I0127 06:52:39.064882 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j2snq" Jan 27 06:52:39 crc kubenswrapper[4796]: I0127 06:52:39.129404 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j2snq" Jan 27 06:52:39 crc kubenswrapper[4796]: I0127 06:52:39.226770 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2grzv" Jan 27 06:52:39 crc kubenswrapper[4796]: I0127 06:52:39.227183 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j2snq" Jan 27 06:52:40 crc kubenswrapper[4796]: I0127 06:52:40.850196 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-4brwh"] Jan 27 06:52:40 crc kubenswrapper[4796]: I0127 06:52:40.851403 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-4brwh" Jan 27 06:52:40 crc kubenswrapper[4796]: I0127 06:52:40.912301 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-4brwh"] Jan 27 06:52:40 crc kubenswrapper[4796]: I0127 06:52:40.980132 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d4036ad2-e582-441f-ab4b-b80c138632fe-trusted-ca\") pod \"image-registry-66df7c8f76-4brwh\" (UID: \"d4036ad2-e582-441f-ab4b-b80c138632fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-4brwh" Jan 27 06:52:40 crc kubenswrapper[4796]: I0127 06:52:40.980219 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln285\" (UniqueName: \"kubernetes.io/projected/d4036ad2-e582-441f-ab4b-b80c138632fe-kube-api-access-ln285\") pod \"image-registry-66df7c8f76-4brwh\" (UID: \"d4036ad2-e582-441f-ab4b-b80c138632fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-4brwh" Jan 27 06:52:40 crc kubenswrapper[4796]: I0127 06:52:40.980264 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-4brwh\" (UID: \"d4036ad2-e582-441f-ab4b-b80c138632fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-4brwh" Jan 27 06:52:40 crc kubenswrapper[4796]: I0127 06:52:40.980298 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d4036ad2-e582-441f-ab4b-b80c138632fe-installation-pull-secrets\") pod \"image-registry-66df7c8f76-4brwh\" (UID: \"d4036ad2-e582-441f-ab4b-b80c138632fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-4brwh" Jan 27 06:52:40 crc kubenswrapper[4796]: I0127 06:52:40.980320 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d4036ad2-e582-441f-ab4b-b80c138632fe-ca-trust-extracted\") pod \"image-registry-66df7c8f76-4brwh\" (UID: \"d4036ad2-e582-441f-ab4b-b80c138632fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-4brwh" Jan 27 06:52:40 crc kubenswrapper[4796]: I0127 
06:52:40.980346 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d4036ad2-e582-441f-ab4b-b80c138632fe-bound-sa-token\") pod \"image-registry-66df7c8f76-4brwh\" (UID: \"d4036ad2-e582-441f-ab4b-b80c138632fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-4brwh" Jan 27 06:52:40 crc kubenswrapper[4796]: I0127 06:52:40.980437 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d4036ad2-e582-441f-ab4b-b80c138632fe-registry-tls\") pod \"image-registry-66df7c8f76-4brwh\" (UID: \"d4036ad2-e582-441f-ab4b-b80c138632fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-4brwh" Jan 27 06:52:40 crc kubenswrapper[4796]: I0127 06:52:40.980631 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d4036ad2-e582-441f-ab4b-b80c138632fe-registry-certificates\") pod \"image-registry-66df7c8f76-4brwh\" (UID: \"d4036ad2-e582-441f-ab4b-b80c138632fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-4brwh" Jan 27 06:52:41 crc kubenswrapper[4796]: I0127 06:52:41.003437 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-4brwh\" (UID: \"d4036ad2-e582-441f-ab4b-b80c138632fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-4brwh" Jan 27 06:52:41 crc kubenswrapper[4796]: I0127 06:52:41.081626 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln285\" (UniqueName: \"kubernetes.io/projected/d4036ad2-e582-441f-ab4b-b80c138632fe-kube-api-access-ln285\") pod \"image-registry-66df7c8f76-4brwh\" (UID: \"d4036ad2-e582-441f-ab4b-b80c138632fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-4brwh" Jan 27 06:52:41 crc kubenswrapper[4796]: I0127 06:52:41.081698 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d4036ad2-e582-441f-ab4b-b80c138632fe-installation-pull-secrets\") pod \"image-registry-66df7c8f76-4brwh\" (UID: \"d4036ad2-e582-441f-ab4b-b80c138632fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-4brwh" Jan 27 06:52:41 crc kubenswrapper[4796]: I0127 06:52:41.081756 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d4036ad2-e582-441f-ab4b-b80c138632fe-ca-trust-extracted\") pod \"image-registry-66df7c8f76-4brwh\" (UID: \"d4036ad2-e582-441f-ab4b-b80c138632fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-4brwh" Jan 27 06:52:41 crc kubenswrapper[4796]: I0127 06:52:41.081794 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d4036ad2-e582-441f-ab4b-b80c138632fe-bound-sa-token\") pod \"image-registry-66df7c8f76-4brwh\" (UID: \"d4036ad2-e582-441f-ab4b-b80c138632fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-4brwh" Jan 27 06:52:41 crc kubenswrapper[4796]: I0127 06:52:41.081833 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/d4036ad2-e582-441f-ab4b-b80c138632fe-registry-tls\") pod \"image-registry-66df7c8f76-4brwh\" (UID: \"d4036ad2-e582-441f-ab4b-b80c138632fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-4brwh" Jan 27 06:52:41 crc kubenswrapper[4796]: I0127 06:52:41.081860 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d4036ad2-e582-441f-ab4b-b80c138632fe-registry-certificates\") pod \"image-registry-66df7c8f76-4brwh\" (UID: \"d4036ad2-e582-441f-ab4b-b80c138632fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-4brwh" Jan 27 06:52:41 crc kubenswrapper[4796]: I0127 06:52:41.081905 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d4036ad2-e582-441f-ab4b-b80c138632fe-trusted-ca\") pod \"image-registry-66df7c8f76-4brwh\" (UID: \"d4036ad2-e582-441f-ab4b-b80c138632fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-4brwh" Jan 27 06:52:41 crc kubenswrapper[4796]: I0127 06:52:41.083437 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d4036ad2-e582-441f-ab4b-b80c138632fe-trusted-ca\") pod \"image-registry-66df7c8f76-4brwh\" (UID: \"d4036ad2-e582-441f-ab4b-b80c138632fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-4brwh" Jan 27 06:52:41 crc kubenswrapper[4796]: I0127 06:52:41.084009 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d4036ad2-e582-441f-ab4b-b80c138632fe-ca-trust-extracted\") pod \"image-registry-66df7c8f76-4brwh\" (UID: \"d4036ad2-e582-441f-ab4b-b80c138632fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-4brwh" Jan 27 06:52:41 crc kubenswrapper[4796]: I0127 06:52:41.084809 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d4036ad2-e582-441f-ab4b-b80c138632fe-registry-certificates\") pod \"image-registry-66df7c8f76-4brwh\" (UID: \"d4036ad2-e582-441f-ab4b-b80c138632fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-4brwh" Jan 27 06:52:41 crc kubenswrapper[4796]: I0127 06:52:41.090160 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d4036ad2-e582-441f-ab4b-b80c138632fe-installation-pull-secrets\") pod \"image-registry-66df7c8f76-4brwh\" (UID: \"d4036ad2-e582-441f-ab4b-b80c138632fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-4brwh" Jan 27 06:52:41 crc kubenswrapper[4796]: I0127 06:52:41.090270 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d4036ad2-e582-441f-ab4b-b80c138632fe-registry-tls\") pod \"image-registry-66df7c8f76-4brwh\" (UID: \"d4036ad2-e582-441f-ab4b-b80c138632fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-4brwh" Jan 27 06:52:41 crc kubenswrapper[4796]: I0127 06:52:41.098576 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln285\" (UniqueName: \"kubernetes.io/projected/d4036ad2-e582-441f-ab4b-b80c138632fe-kube-api-access-ln285\") pod \"image-registry-66df7c8f76-4brwh\" (UID: \"d4036ad2-e582-441f-ab4b-b80c138632fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-4brwh" Jan 27 06:52:41 crc kubenswrapper[4796]: 
I0127 06:52:41.099327 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d4036ad2-e582-441f-ab4b-b80c138632fe-bound-sa-token\") pod \"image-registry-66df7c8f76-4brwh\" (UID: \"d4036ad2-e582-441f-ab4b-b80c138632fe\") " pod="openshift-image-registry/image-registry-66df7c8f76-4brwh" Jan 27 06:52:41 crc kubenswrapper[4796]: I0127 06:52:41.171046 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-4brwh" Jan 27 06:52:41 crc kubenswrapper[4796]: I0127 06:52:41.245709 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z8mzp" Jan 27 06:52:41 crc kubenswrapper[4796]: I0127 06:52:41.245752 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z8mzp" Jan 27 06:52:41 crc kubenswrapper[4796]: I0127 06:52:41.305587 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z8mzp" Jan 27 06:52:41 crc kubenswrapper[4796]: I0127 06:52:41.422261 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-4brwh"] Jan 27 06:52:41 crc kubenswrapper[4796]: W0127 06:52:41.438054 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4036ad2_e582_441f_ab4b_b80c138632fe.slice/crio-065b7c46e57e51a4b0ec7d19c37a7e48d7e0037fec501b7a143755e4517c7fa8 WatchSource:0}: Error finding container 065b7c46e57e51a4b0ec7d19c37a7e48d7e0037fec501b7a143755e4517c7fa8: Status 404 returned error can't find the container with id 065b7c46e57e51a4b0ec7d19c37a7e48d7e0037fec501b7a143755e4517c7fa8 Jan 27 06:52:41 crc kubenswrapper[4796]: I0127 06:52:41.458963 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9qlsp" Jan 27 06:52:41 crc kubenswrapper[4796]: I0127 06:52:41.459041 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9qlsp" Jan 27 06:52:41 crc kubenswrapper[4796]: I0127 06:52:41.504272 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9qlsp" Jan 27 06:52:42 crc kubenswrapper[4796]: I0127 06:52:42.184210 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-4brwh" event={"ID":"d4036ad2-e582-441f-ab4b-b80c138632fe","Type":"ContainerStarted","Data":"fd3f0a77456a8083b0650dd58c5c83942e3cf717dbe9de70481761526a8a902b"} Jan 27 06:52:42 crc kubenswrapper[4796]: I0127 06:52:42.184453 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-4brwh" event={"ID":"d4036ad2-e582-441f-ab4b-b80c138632fe","Type":"ContainerStarted","Data":"065b7c46e57e51a4b0ec7d19c37a7e48d7e0037fec501b7a143755e4517c7fa8"} Jan 27 06:52:42 crc kubenswrapper[4796]: I0127 06:52:42.184887 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-4brwh" Jan 27 06:52:42 crc kubenswrapper[4796]: I0127 06:52:42.238932 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z8mzp" Jan 27 06:52:42 crc kubenswrapper[4796]: I0127 06:52:42.251046 4796 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9qlsp" Jan 27 06:52:42 crc kubenswrapper[4796]: I0127 06:52:42.257768 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-4brwh" podStartSLOduration=2.257759111 podStartE2EDuration="2.257759111s" podCreationTimestamp="2026-01-27 06:52:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:52:42.204711267 +0000 UTC m=+383.311678594" watchObservedRunningTime="2026-01-27 06:52:42.257759111 +0000 UTC m=+383.364726438" Jan 27 06:53:01 crc kubenswrapper[4796]: I0127 06:53:01.184600 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-4brwh" Jan 27 06:53:01 crc kubenswrapper[4796]: I0127 06:53:01.267343 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wm5zr"] Jan 27 06:53:03 crc kubenswrapper[4796]: I0127 06:53:03.788757 4796 patch_prober.go:28] interesting pod/machine-config-daemon-qfqgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 06:53:03 crc kubenswrapper[4796]: I0127 06:53:03.788843 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" podUID="84d7512b-555d-440a-b817-deb8ba12f61d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 06:53:26 crc kubenswrapper[4796]: I0127 06:53:26.331081 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" podUID="72371683-98ec-4d1a-a1cf-f9d2b072c3d7" containerName="registry" containerID="cri-o://b8483970e1f5fddb5da373d8843911b3817b940030d8cf155e47ee3925d705b8" gracePeriod=30 Jan 27 06:53:26 crc kubenswrapper[4796]: I0127 06:53:26.468773 4796 generic.go:334] "Generic (PLEG): container finished" podID="72371683-98ec-4d1a-a1cf-f9d2b072c3d7" containerID="b8483970e1f5fddb5da373d8843911b3817b940030d8cf155e47ee3925d705b8" exitCode=0 Jan 27 06:53:26 crc kubenswrapper[4796]: I0127 06:53:26.468854 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" event={"ID":"72371683-98ec-4d1a-a1cf-f9d2b072c3d7","Type":"ContainerDied","Data":"b8483970e1f5fddb5da373d8843911b3817b940030d8cf155e47ee3925d705b8"} Jan 27 06:53:26 crc kubenswrapper[4796]: I0127 06:53:26.740746 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:53:26 crc kubenswrapper[4796]: I0127 06:53:26.906966 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-trusted-ca\") pod \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " Jan 27 06:53:26 crc kubenswrapper[4796]: I0127 06:53:26.907285 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-registry-tls\") pod \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " Jan 27 06:53:26 crc kubenswrapper[4796]: I0127 06:53:26.907391 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-registry-certificates\") pod \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " Jan 27 06:53:26 crc kubenswrapper[4796]: I0127 06:53:26.907520 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6njl\" (UniqueName: \"kubernetes.io/projected/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-kube-api-access-r6njl\") pod \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " Jan 27 06:53:26 crc kubenswrapper[4796]: I0127 06:53:26.907672 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-ca-trust-extracted\") pod \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " Jan 27 06:53:26 crc kubenswrapper[4796]: I0127 06:53:26.907834 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-installation-pull-secrets\") pod \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " Jan 27 06:53:26 crc kubenswrapper[4796]: I0127 06:53:26.908033 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-bound-sa-token\") pod \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " Jan 27 06:53:26 crc kubenswrapper[4796]: I0127 06:53:26.908315 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\" (UID: \"72371683-98ec-4d1a-a1cf-f9d2b072c3d7\") " Jan 27 06:53:26 crc kubenswrapper[4796]: I0127 06:53:26.909123 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "72371683-98ec-4d1a-a1cf-f9d2b072c3d7" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:53:26 crc kubenswrapper[4796]: I0127 06:53:26.909730 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "72371683-98ec-4d1a-a1cf-f9d2b072c3d7" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:53:26 crc kubenswrapper[4796]: I0127 06:53:26.920768 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "72371683-98ec-4d1a-a1cf-f9d2b072c3d7" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:53:26 crc kubenswrapper[4796]: I0127 06:53:26.920769 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "72371683-98ec-4d1a-a1cf-f9d2b072c3d7" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:53:26 crc kubenswrapper[4796]: I0127 06:53:26.921238 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "72371683-98ec-4d1a-a1cf-f9d2b072c3d7" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:53:26 crc kubenswrapper[4796]: I0127 06:53:26.921549 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-kube-api-access-r6njl" (OuterVolumeSpecName: "kube-api-access-r6njl") pod "72371683-98ec-4d1a-a1cf-f9d2b072c3d7" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7"). InnerVolumeSpecName "kube-api-access-r6njl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:53:26 crc kubenswrapper[4796]: I0127 06:53:26.922222 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "72371683-98ec-4d1a-a1cf-f9d2b072c3d7" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 27 06:53:26 crc kubenswrapper[4796]: I0127 06:53:26.924882 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "72371683-98ec-4d1a-a1cf-f9d2b072c3d7" (UID: "72371683-98ec-4d1a-a1cf-f9d2b072c3d7"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:53:27 crc kubenswrapper[4796]: I0127 06:53:27.010324 4796 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 27 06:53:27 crc kubenswrapper[4796]: I0127 06:53:27.010392 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:53:27 crc kubenswrapper[4796]: I0127 06:53:27.010418 4796 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 27 06:53:27 crc kubenswrapper[4796]: I0127 06:53:27.010442 4796 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 27 06:53:27 crc kubenswrapper[4796]: I0127 06:53:27.010470 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6njl\" (UniqueName: \"kubernetes.io/projected/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-kube-api-access-r6njl\") on node \"crc\" DevicePath \"\"" Jan 27 06:53:27 crc kubenswrapper[4796]: I0127 06:53:27.010494 4796 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 27 06:53:27 crc kubenswrapper[4796]: I0127 06:53:27.010517 4796 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/72371683-98ec-4d1a-a1cf-f9d2b072c3d7-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 27 06:53:27 crc kubenswrapper[4796]: I0127 06:53:27.476383 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" event={"ID":"72371683-98ec-4d1a-a1cf-f9d2b072c3d7","Type":"ContainerDied","Data":"59c5d48238599bc8a275bbf7a5dcc01a2f0bac0d2d976b3487fdc4aa4b49c496"} Jan 27 06:53:27 crc kubenswrapper[4796]: I0127 06:53:27.476434 4796 scope.go:117] "RemoveContainer" containerID="b8483970e1f5fddb5da373d8843911b3817b940030d8cf155e47ee3925d705b8" Jan 27 06:53:27 crc kubenswrapper[4796]: I0127 06:53:27.476503 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-wm5zr" Jan 27 06:53:27 crc kubenswrapper[4796]: I0127 06:53:27.526489 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wm5zr"] Jan 27 06:53:27 crc kubenswrapper[4796]: I0127 06:53:27.531340 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-wm5zr"] Jan 27 06:53:28 crc kubenswrapper[4796]: I0127 06:53:28.757407 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72371683-98ec-4d1a-a1cf-f9d2b072c3d7" path="/var/lib/kubelet/pods/72371683-98ec-4d1a-a1cf-f9d2b072c3d7/volumes" Jan 27 06:53:33 crc kubenswrapper[4796]: I0127 06:53:33.788963 4796 patch_prober.go:28] interesting pod/machine-config-daemon-qfqgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 06:53:33 crc kubenswrapper[4796]: I0127 06:53:33.789440 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" podUID="84d7512b-555d-440a-b817-deb8ba12f61d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 06:53:33 crc kubenswrapper[4796]: I0127 06:53:33.789509 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" Jan 27 06:53:33 crc kubenswrapper[4796]: I0127 06:53:33.790341 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0ab50194b0e5182dbd095e094c1d6598ffec9adf9632e07062134c1284a52097"} pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 06:53:33 crc kubenswrapper[4796]: I0127 06:53:33.790442 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" podUID="84d7512b-555d-440a-b817-deb8ba12f61d" containerName="machine-config-daemon" containerID="cri-o://0ab50194b0e5182dbd095e094c1d6598ffec9adf9632e07062134c1284a52097" gracePeriod=600 Jan 27 06:53:34 crc kubenswrapper[4796]: I0127 06:53:34.531953 4796 generic.go:334] "Generic (PLEG): container finished" podID="84d7512b-555d-440a-b817-deb8ba12f61d" containerID="0ab50194b0e5182dbd095e094c1d6598ffec9adf9632e07062134c1284a52097" exitCode=0 Jan 27 06:53:34 crc kubenswrapper[4796]: I0127 06:53:34.532067 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" event={"ID":"84d7512b-555d-440a-b817-deb8ba12f61d","Type":"ContainerDied","Data":"0ab50194b0e5182dbd095e094c1d6598ffec9adf9632e07062134c1284a52097"} Jan 27 06:53:34 crc kubenswrapper[4796]: I0127 06:53:34.532398 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" event={"ID":"84d7512b-555d-440a-b817-deb8ba12f61d","Type":"ContainerStarted","Data":"dfa6b1db554a19ec28fc2fce13b6e36d08d4e2a69c60abcaaae99832b0c71be3"} Jan 27 06:53:34 crc kubenswrapper[4796]: I0127 06:53:34.532431 4796 scope.go:117] "RemoveContainer" 
containerID="e5de768022ec93faa0dc12c79330c073c9260e6d3f17ee6e77aa2325c38e01d2" Jan 27 06:56:03 crc kubenswrapper[4796]: I0127 06:56:03.789009 4796 patch_prober.go:28] interesting pod/machine-config-daemon-qfqgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 06:56:03 crc kubenswrapper[4796]: I0127 06:56:03.790796 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" podUID="84d7512b-555d-440a-b817-deb8ba12f61d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 06:56:33 crc kubenswrapper[4796]: I0127 06:56:33.789069 4796 patch_prober.go:28] interesting pod/machine-config-daemon-qfqgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 06:56:33 crc kubenswrapper[4796]: I0127 06:56:33.789872 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" podUID="84d7512b-555d-440a-b817-deb8ba12f61d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 06:57:03 crc kubenswrapper[4796]: I0127 06:57:03.788869 4796 patch_prober.go:28] interesting pod/machine-config-daemon-qfqgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 06:57:03 crc kubenswrapper[4796]: I0127 06:57:03.789518 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" podUID="84d7512b-555d-440a-b817-deb8ba12f61d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 06:57:03 crc kubenswrapper[4796]: I0127 06:57:03.789631 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" Jan 27 06:57:03 crc kubenswrapper[4796]: I0127 06:57:03.791252 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dfa6b1db554a19ec28fc2fce13b6e36d08d4e2a69c60abcaaae99832b0c71be3"} pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 06:57:03 crc kubenswrapper[4796]: I0127 06:57:03.791377 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" podUID="84d7512b-555d-440a-b817-deb8ba12f61d" containerName="machine-config-daemon" containerID="cri-o://dfa6b1db554a19ec28fc2fce13b6e36d08d4e2a69c60abcaaae99832b0c71be3" gracePeriod=600 Jan 27 06:57:03 crc kubenswrapper[4796]: I0127 06:57:03.939395 4796 generic.go:334] "Generic (PLEG): container finished" podID="84d7512b-555d-440a-b817-deb8ba12f61d" 
containerID="dfa6b1db554a19ec28fc2fce13b6e36d08d4e2a69c60abcaaae99832b0c71be3" exitCode=0 Jan 27 06:57:03 crc kubenswrapper[4796]: I0127 06:57:03.939434 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" event={"ID":"84d7512b-555d-440a-b817-deb8ba12f61d","Type":"ContainerDied","Data":"dfa6b1db554a19ec28fc2fce13b6e36d08d4e2a69c60abcaaae99832b0c71be3"} Jan 27 06:57:03 crc kubenswrapper[4796]: I0127 06:57:03.939469 4796 scope.go:117] "RemoveContainer" containerID="0ab50194b0e5182dbd095e094c1d6598ffec9adf9632e07062134c1284a52097" Jan 27 06:57:04 crc kubenswrapper[4796]: I0127 06:57:04.952190 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" event={"ID":"84d7512b-555d-440a-b817-deb8ba12f61d","Type":"ContainerStarted","Data":"2c94d4638721f045282b7bf6b0d1a7f76dc81b09a6581078ab42936dd6c8b456"} Jan 27 06:58:03 crc kubenswrapper[4796]: I0127 06:58:03.361823 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-nrqwp"] Jan 27 06:58:03 crc kubenswrapper[4796]: E0127 06:58:03.362700 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72371683-98ec-4d1a-a1cf-f9d2b072c3d7" containerName="registry" Jan 27 06:58:03 crc kubenswrapper[4796]: I0127 06:58:03.362717 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="72371683-98ec-4d1a-a1cf-f9d2b072c3d7" containerName="registry" Jan 27 06:58:03 crc kubenswrapper[4796]: I0127 06:58:03.362838 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="72371683-98ec-4d1a-a1cf-f9d2b072c3d7" containerName="registry" Jan 27 06:58:03 crc kubenswrapper[4796]: I0127 06:58:03.363269 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-nrqwp" Jan 27 06:58:03 crc kubenswrapper[4796]: I0127 06:58:03.365462 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 27 06:58:03 crc kubenswrapper[4796]: I0127 06:58:03.365714 4796 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-v452s" Jan 27 06:58:03 crc kubenswrapper[4796]: I0127 06:58:03.366770 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 27 06:58:03 crc kubenswrapper[4796]: I0127 06:58:03.373590 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-nrqwp"] Jan 27 06:58:03 crc kubenswrapper[4796]: I0127 06:58:03.418267 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-c464t"] Jan 27 06:58:03 crc kubenswrapper[4796]: I0127 06:58:03.419098 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-c464t" Jan 27 06:58:03 crc kubenswrapper[4796]: I0127 06:58:03.428203 4796 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-nkd85" Jan 27 06:58:03 crc kubenswrapper[4796]: I0127 06:58:03.434736 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-qwbkz"] Jan 27 06:58:03 crc kubenswrapper[4796]: I0127 06:58:03.435777 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-qwbkz" Jan 27 06:58:03 crc kubenswrapper[4796]: I0127 06:58:03.437515 4796 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-gszdd" Jan 27 06:58:03 crc kubenswrapper[4796]: I0127 06:58:03.451817 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-c464t"] Jan 27 06:58:03 crc kubenswrapper[4796]: I0127 06:58:03.461853 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-qwbkz"] Jan 27 06:58:03 crc kubenswrapper[4796]: I0127 06:58:03.542481 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkvz9\" (UniqueName: \"kubernetes.io/projected/42a9a105-7972-4287-8bb6-203e3dfa1339-kube-api-access-gkvz9\") pod \"cert-manager-webhook-687f57d79b-qwbkz\" (UID: \"42a9a105-7972-4287-8bb6-203e3dfa1339\") " pod="cert-manager/cert-manager-webhook-687f57d79b-qwbkz" Jan 27 06:58:03 crc kubenswrapper[4796]: I0127 06:58:03.542766 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkknv\" (UniqueName: \"kubernetes.io/projected/6a87b8e8-dd35-4bb9-88cd-b41e445d785d-kube-api-access-mkknv\") pod \"cert-manager-cainjector-cf98fcc89-nrqwp\" (UID: \"6a87b8e8-dd35-4bb9-88cd-b41e445d785d\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-nrqwp" Jan 27 06:58:03 crc kubenswrapper[4796]: I0127 06:58:03.542971 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pm9q\" (UniqueName: \"kubernetes.io/projected/0d54c657-ac07-4f66-afe0-8f6109670a43-kube-api-access-7pm9q\") pod \"cert-manager-858654f9db-c464t\" (UID: \"0d54c657-ac07-4f66-afe0-8f6109670a43\") " pod="cert-manager/cert-manager-858654f9db-c464t" Jan 27 06:58:03 crc kubenswrapper[4796]: I0127 06:58:03.644501 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pm9q\" (UniqueName: \"kubernetes.io/projected/0d54c657-ac07-4f66-afe0-8f6109670a43-kube-api-access-7pm9q\") pod \"cert-manager-858654f9db-c464t\" (UID: \"0d54c657-ac07-4f66-afe0-8f6109670a43\") " pod="cert-manager/cert-manager-858654f9db-c464t" Jan 27 06:58:03 crc kubenswrapper[4796]: I0127 06:58:03.644598 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkvz9\" (UniqueName: \"kubernetes.io/projected/42a9a105-7972-4287-8bb6-203e3dfa1339-kube-api-access-gkvz9\") pod \"cert-manager-webhook-687f57d79b-qwbkz\" (UID: \"42a9a105-7972-4287-8bb6-203e3dfa1339\") " pod="cert-manager/cert-manager-webhook-687f57d79b-qwbkz" Jan 27 06:58:03 crc kubenswrapper[4796]: I0127 06:58:03.644697 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkknv\" (UniqueName: \"kubernetes.io/projected/6a87b8e8-dd35-4bb9-88cd-b41e445d785d-kube-api-access-mkknv\") pod \"cert-manager-cainjector-cf98fcc89-nrqwp\" (UID: \"6a87b8e8-dd35-4bb9-88cd-b41e445d785d\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-nrqwp" Jan 27 06:58:03 crc kubenswrapper[4796]: I0127 06:58:03.667634 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pm9q\" (UniqueName: \"kubernetes.io/projected/0d54c657-ac07-4f66-afe0-8f6109670a43-kube-api-access-7pm9q\") pod \"cert-manager-858654f9db-c464t\" (UID: \"0d54c657-ac07-4f66-afe0-8f6109670a43\") " 
pod="cert-manager/cert-manager-858654f9db-c464t" Jan 27 06:58:03 crc kubenswrapper[4796]: I0127 06:58:03.667691 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkknv\" (UniqueName: \"kubernetes.io/projected/6a87b8e8-dd35-4bb9-88cd-b41e445d785d-kube-api-access-mkknv\") pod \"cert-manager-cainjector-cf98fcc89-nrqwp\" (UID: \"6a87b8e8-dd35-4bb9-88cd-b41e445d785d\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-nrqwp" Jan 27 06:58:03 crc kubenswrapper[4796]: I0127 06:58:03.669089 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkvz9\" (UniqueName: \"kubernetes.io/projected/42a9a105-7972-4287-8bb6-203e3dfa1339-kube-api-access-gkvz9\") pod \"cert-manager-webhook-687f57d79b-qwbkz\" (UID: \"42a9a105-7972-4287-8bb6-203e3dfa1339\") " pod="cert-manager/cert-manager-webhook-687f57d79b-qwbkz" Jan 27 06:58:03 crc kubenswrapper[4796]: I0127 06:58:03.712242 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-nrqwp" Jan 27 06:58:03 crc kubenswrapper[4796]: I0127 06:58:03.737694 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-c464t" Jan 27 06:58:03 crc kubenswrapper[4796]: I0127 06:58:03.751678 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-qwbkz" Jan 27 06:58:04 crc kubenswrapper[4796]: I0127 06:58:04.152130 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-nrqwp"] Jan 27 06:58:04 crc kubenswrapper[4796]: I0127 06:58:04.159996 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 06:58:04 crc kubenswrapper[4796]: I0127 06:58:04.199204 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-qwbkz"] Jan 27 06:58:04 crc kubenswrapper[4796]: W0127 06:58:04.201998 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42a9a105_7972_4287_8bb6_203e3dfa1339.slice/crio-c85e4e70e5d0332fa253d9e9d7ed3d3cea8a63d8ee52c7727300ee51526acb48 WatchSource:0}: Error finding container c85e4e70e5d0332fa253d9e9d7ed3d3cea8a63d8ee52c7727300ee51526acb48: Status 404 returned error can't find the container with id c85e4e70e5d0332fa253d9e9d7ed3d3cea8a63d8ee52c7727300ee51526acb48 Jan 27 06:58:04 crc kubenswrapper[4796]: W0127 06:58:04.203599 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d54c657_ac07_4f66_afe0_8f6109670a43.slice/crio-9f1ddbf7b0511ca2339eb7484cb57290bd3aa96241b05e3a44d2a8fc491931df WatchSource:0}: Error finding container 9f1ddbf7b0511ca2339eb7484cb57290bd3aa96241b05e3a44d2a8fc491931df: Status 404 returned error can't find the container with id 9f1ddbf7b0511ca2339eb7484cb57290bd3aa96241b05e3a44d2a8fc491931df Jan 27 06:58:04 crc kubenswrapper[4796]: I0127 06:58:04.204378 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-c464t"] Jan 27 06:58:04 crc kubenswrapper[4796]: I0127 06:58:04.326755 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-nrqwp" 
event={"ID":"6a87b8e8-dd35-4bb9-88cd-b41e445d785d","Type":"ContainerStarted","Data":"237199d7730193e79d0f3b06ec7df4b8a20f167061776f2f505bb08b068e6365"} Jan 27 06:58:04 crc kubenswrapper[4796]: I0127 06:58:04.328327 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-c464t" event={"ID":"0d54c657-ac07-4f66-afe0-8f6109670a43","Type":"ContainerStarted","Data":"9f1ddbf7b0511ca2339eb7484cb57290bd3aa96241b05e3a44d2a8fc491931df"} Jan 27 06:58:04 crc kubenswrapper[4796]: I0127 06:58:04.329294 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-qwbkz" event={"ID":"42a9a105-7972-4287-8bb6-203e3dfa1339","Type":"ContainerStarted","Data":"c85e4e70e5d0332fa253d9e9d7ed3d3cea8a63d8ee52c7727300ee51526acb48"} Jan 27 06:58:09 crc kubenswrapper[4796]: I0127 06:58:09.361441 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-nrqwp" event={"ID":"6a87b8e8-dd35-4bb9-88cd-b41e445d785d","Type":"ContainerStarted","Data":"b0fe9bd7c772accb455310bb8c989efd7b667f071db5236f3376689fc7126abf"} Jan 27 06:58:09 crc kubenswrapper[4796]: I0127 06:58:09.366633 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-c464t" event={"ID":"0d54c657-ac07-4f66-afe0-8f6109670a43","Type":"ContainerStarted","Data":"977454e42447de56fd0b62430187bc8d12d4100f5dd5bff3cefd8b776013de6e"} Jan 27 06:58:09 crc kubenswrapper[4796]: I0127 06:58:09.369365 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-qwbkz" event={"ID":"42a9a105-7972-4287-8bb6-203e3dfa1339","Type":"ContainerStarted","Data":"7c493ebd392b7498cc61ddbd440c841db698b6324d64cd64e5cad992b538618e"} Jan 27 06:58:09 crc kubenswrapper[4796]: I0127 06:58:09.369728 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-qwbkz" Jan 27 06:58:09 crc kubenswrapper[4796]: I0127 06:58:09.386678 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-nrqwp" podStartSLOduration=2.286888418 podStartE2EDuration="6.386661794s" podCreationTimestamp="2026-01-27 06:58:03 +0000 UTC" firstStartedPulling="2026-01-27 06:58:04.159750787 +0000 UTC m=+705.266718114" lastFinishedPulling="2026-01-27 06:58:08.259524143 +0000 UTC m=+709.366491490" observedRunningTime="2026-01-27 06:58:09.382252334 +0000 UTC m=+710.489219661" watchObservedRunningTime="2026-01-27 06:58:09.386661794 +0000 UTC m=+710.493629121" Jan 27 06:58:09 crc kubenswrapper[4796]: I0127 06:58:09.404047 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-c464t" podStartSLOduration=2.416174633 podStartE2EDuration="6.404023785s" podCreationTimestamp="2026-01-27 06:58:03 +0000 UTC" firstStartedPulling="2026-01-27 06:58:04.205785711 +0000 UTC m=+705.312753038" lastFinishedPulling="2026-01-27 06:58:08.193634863 +0000 UTC m=+709.300602190" observedRunningTime="2026-01-27 06:58:09.400253531 +0000 UTC m=+710.507220868" watchObservedRunningTime="2026-01-27 06:58:09.404023785 +0000 UTC m=+710.510991122" Jan 27 06:58:09 crc kubenswrapper[4796]: I0127 06:58:09.427716 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-qwbkz" podStartSLOduration=2.448079288 podStartE2EDuration="6.427694454s" podCreationTimestamp="2026-01-27 06:58:03 +0000 UTC" 
firstStartedPulling="2026-01-27 06:58:04.205105735 +0000 UTC m=+705.312073062" lastFinishedPulling="2026-01-27 06:58:08.184720901 +0000 UTC m=+709.291688228" observedRunningTime="2026-01-27 06:58:09.424026113 +0000 UTC m=+710.530993440" watchObservedRunningTime="2026-01-27 06:58:09.427694454 +0000 UTC m=+710.534661801" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.042077 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xqmc4"] Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.042654 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="ovn-controller" containerID="cri-o://bafac0d6ec40525fc7e61773ebfc410b7f7f42535491b72e0301ce93bd1a6a46" gracePeriod=30 Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.042861 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="northd" containerID="cri-o://6e09fbd802f33aa6f2ebf9135fea9a98a1e8c0bb14440043fbada1b528b505ed" gracePeriod=30 Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.042789 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="sbdb" containerID="cri-o://7353129cb709af2186d760a00dbef3ae91dc7feeb6dab2149754accb95fae36b" gracePeriod=30 Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.042940 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://88151a9fd7ac9365f1563c367588d8093a16206cfebfc1703f8d9cfceaee5134" gracePeriod=30 Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.042997 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="kube-rbac-proxy-node" containerID="cri-o://8fa4ef31b3123acd6bde6c015b604904e010d9996f7ba318fa1a7ae674ea7cf5" gracePeriod=30 Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.043081 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="ovn-acl-logging" containerID="cri-o://c4e75d49621819f00db37b6f4471e5188a3eb781549bbae1422b6c2216a3366c" gracePeriod=30 Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.043279 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="nbdb" containerID="cri-o://37bbcf36779d5d8ff4b06b0dba628a581f534c90af5774321a51bb6360c88db6" gracePeriod=30 Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.119336 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="ovnkube-controller" containerID="cri-o://b0114a658b7e6fb8e035558631620e865e26a2e4358c96ea56112e7debad7b53" gracePeriod=30 Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.399768 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-46ql2_b3555bc2-e335-4479-8b6f-8b5970b27a25/kube-multus/2.log" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.400661 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-46ql2_b3555bc2-e335-4479-8b6f-8b5970b27a25/kube-multus/1.log" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.400704 4796 generic.go:334] "Generic (PLEG): container finished" podID="b3555bc2-e335-4479-8b6f-8b5970b27a25" containerID="c413a9ee0ad373bb16813255f639547353f408c44b80cb704801bfad74788d62" exitCode=2 Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.400758 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-46ql2" event={"ID":"b3555bc2-e335-4479-8b6f-8b5970b27a25","Type":"ContainerDied","Data":"c413a9ee0ad373bb16813255f639547353f408c44b80cb704801bfad74788d62"} Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.400931 4796 scope.go:117] "RemoveContainer" containerID="f7a41342bb50022b27d6a19ad1eef8be259001106eed45870bab131cd92fc35c" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.401680 4796 scope.go:117] "RemoveContainer" containerID="c413a9ee0ad373bb16813255f639547353f408c44b80cb704801bfad74788d62" Jan 27 06:58:12 crc kubenswrapper[4796]: E0127 06:58:12.401958 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-46ql2_openshift-multus(b3555bc2-e335-4479-8b6f-8b5970b27a25)\"" pod="openshift-multus/multus-46ql2" podUID="b3555bc2-e335-4479-8b6f-8b5970b27a25" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.404967 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xqmc4_a1fb58d6-d9a4-4095-be46-a544216963f7/ovnkube-controller/3.log" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.409584 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xqmc4_a1fb58d6-d9a4-4095-be46-a544216963f7/ovn-acl-logging/0.log" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.410458 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xqmc4_a1fb58d6-d9a4-4095-be46-a544216963f7/ovn-controller/0.log" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.411104 4796 generic.go:334] "Generic (PLEG): container finished" podID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerID="b0114a658b7e6fb8e035558631620e865e26a2e4358c96ea56112e7debad7b53" exitCode=0 Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.411147 4796 generic.go:334] "Generic (PLEG): container finished" podID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerID="7353129cb709af2186d760a00dbef3ae91dc7feeb6dab2149754accb95fae36b" exitCode=0 Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.411164 4796 generic.go:334] "Generic (PLEG): container finished" podID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerID="37bbcf36779d5d8ff4b06b0dba628a581f534c90af5774321a51bb6360c88db6" exitCode=0 Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.411180 4796 generic.go:334] "Generic (PLEG): container finished" podID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerID="6e09fbd802f33aa6f2ebf9135fea9a98a1e8c0bb14440043fbada1b528b505ed" exitCode=0 Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.411194 4796 generic.go:334] "Generic (PLEG): container finished" podID="a1fb58d6-d9a4-4095-be46-a544216963f7" 
containerID="88151a9fd7ac9365f1563c367588d8093a16206cfebfc1703f8d9cfceaee5134" exitCode=0 Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.411206 4796 generic.go:334] "Generic (PLEG): container finished" podID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerID="8fa4ef31b3123acd6bde6c015b604904e010d9996f7ba318fa1a7ae674ea7cf5" exitCode=0 Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.411229 4796 generic.go:334] "Generic (PLEG): container finished" podID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerID="c4e75d49621819f00db37b6f4471e5188a3eb781549bbae1422b6c2216a3366c" exitCode=143 Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.411197 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" event={"ID":"a1fb58d6-d9a4-4095-be46-a544216963f7","Type":"ContainerDied","Data":"b0114a658b7e6fb8e035558631620e865e26a2e4358c96ea56112e7debad7b53"} Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.411289 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" event={"ID":"a1fb58d6-d9a4-4095-be46-a544216963f7","Type":"ContainerDied","Data":"7353129cb709af2186d760a00dbef3ae91dc7feeb6dab2149754accb95fae36b"} Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.411312 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" event={"ID":"a1fb58d6-d9a4-4095-be46-a544216963f7","Type":"ContainerDied","Data":"37bbcf36779d5d8ff4b06b0dba628a581f534c90af5774321a51bb6360c88db6"} Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.411331 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" event={"ID":"a1fb58d6-d9a4-4095-be46-a544216963f7","Type":"ContainerDied","Data":"6e09fbd802f33aa6f2ebf9135fea9a98a1e8c0bb14440043fbada1b528b505ed"} Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.411348 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" event={"ID":"a1fb58d6-d9a4-4095-be46-a544216963f7","Type":"ContainerDied","Data":"88151a9fd7ac9365f1563c367588d8093a16206cfebfc1703f8d9cfceaee5134"} Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.411365 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" event={"ID":"a1fb58d6-d9a4-4095-be46-a544216963f7","Type":"ContainerDied","Data":"8fa4ef31b3123acd6bde6c015b604904e010d9996f7ba318fa1a7ae674ea7cf5"} Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.411381 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" event={"ID":"a1fb58d6-d9a4-4095-be46-a544216963f7","Type":"ContainerDied","Data":"c4e75d49621819f00db37b6f4471e5188a3eb781549bbae1422b6c2216a3366c"} Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.411242 4796 generic.go:334] "Generic (PLEG): container finished" podID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerID="bafac0d6ec40525fc7e61773ebfc410b7f7f42535491b72e0301ce93bd1a6a46" exitCode=143 Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.411397 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" event={"ID":"a1fb58d6-d9a4-4095-be46-a544216963f7","Type":"ContainerDied","Data":"bafac0d6ec40525fc7e61773ebfc410b7f7f42535491b72e0301ce93bd1a6a46"} Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.412571 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xqmc4_a1fb58d6-d9a4-4095-be46-a544216963f7/ovnkube-controller/3.log" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.415223 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xqmc4_a1fb58d6-d9a4-4095-be46-a544216963f7/ovn-acl-logging/0.log" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.415736 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xqmc4_a1fb58d6-d9a4-4095-be46-a544216963f7/ovn-controller/0.log" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.417187 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.429641 4796 scope.go:117] "RemoveContainer" containerID="24ee637b8b6de85e4202e420fb9eb9c78f0c496062f69cb82ed564bf9f524817" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.484294 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rgttg"] Jan 27 06:58:12 crc kubenswrapper[4796]: E0127 06:58:12.484879 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="kubecfg-setup" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.484920 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="kubecfg-setup" Jan 27 06:58:12 crc kubenswrapper[4796]: E0127 06:58:12.484942 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="ovnkube-controller" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.484955 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="ovnkube-controller" Jan 27 06:58:12 crc kubenswrapper[4796]: E0127 06:58:12.484972 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="sbdb" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.484990 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="sbdb" Jan 27 06:58:12 crc kubenswrapper[4796]: E0127 06:58:12.485014 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="ovnkube-controller" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.485031 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="ovnkube-controller" Jan 27 06:58:12 crc kubenswrapper[4796]: E0127 06:58:12.485064 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="ovnkube-controller" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.485081 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="ovnkube-controller" Jan 27 06:58:12 crc kubenswrapper[4796]: E0127 06:58:12.485103 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="ovnkube-controller" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.485121 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="ovnkube-controller" Jan 27 06:58:12 crc kubenswrapper[4796]: E0127 
06:58:12.485147 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="ovn-controller" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.485164 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="ovn-controller" Jan 27 06:58:12 crc kubenswrapper[4796]: E0127 06:58:12.485189 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="nbdb" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.485205 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="nbdb" Jan 27 06:58:12 crc kubenswrapper[4796]: E0127 06:58:12.485230 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="ovn-acl-logging" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.485247 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="ovn-acl-logging" Jan 27 06:58:12 crc kubenswrapper[4796]: E0127 06:58:12.485271 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="kube-rbac-proxy-node" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.485288 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="kube-rbac-proxy-node" Jan 27 06:58:12 crc kubenswrapper[4796]: E0127 06:58:12.485364 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.485452 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 06:58:12 crc kubenswrapper[4796]: E0127 06:58:12.485566 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="northd" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.485581 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="northd" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.485806 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="ovnkube-controller" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.485826 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="ovnkube-controller" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.485847 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="sbdb" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.485863 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="ovnkube-controller" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.485880 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="northd" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.485897 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="ovn-controller" Jan 27 06:58:12 crc 
kubenswrapper[4796]: I0127 06:58:12.485915 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="nbdb" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.485934 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="kube-rbac-proxy-node" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.485952 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="ovnkube-controller" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.485972 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="kube-rbac-proxy-ovn-metrics" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.485990 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="ovn-acl-logging" Jan 27 06:58:12 crc kubenswrapper[4796]: E0127 06:58:12.486184 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="ovnkube-controller" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.486206 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="ovnkube-controller" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.486491 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" containerName="ovnkube-controller" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.488199 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.564883 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-slash\") pod \"a1fb58d6-d9a4-4095-be46-a544216963f7\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.564980 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-etc-openvswitch\") pod \"a1fb58d6-d9a4-4095-be46-a544216963f7\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.565036 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a1fb58d6-d9a4-4095-be46-a544216963f7-env-overrides\") pod \"a1fb58d6-d9a4-4095-be46-a544216963f7\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.565030 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-slash" (OuterVolumeSpecName: "host-slash") pod "a1fb58d6-d9a4-4095-be46-a544216963f7" (UID: "a1fb58d6-d9a4-4095-be46-a544216963f7"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.565079 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-kubelet\") pod \"a1fb58d6-d9a4-4095-be46-a544216963f7\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.565091 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "a1fb58d6-d9a4-4095-be46-a544216963f7" (UID: "a1fb58d6-d9a4-4095-be46-a544216963f7"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.565111 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a1fb58d6-d9a4-4095-be46-a544216963f7-ovn-node-metrics-cert\") pod \"a1fb58d6-d9a4-4095-be46-a544216963f7\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.565226 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-run-netns\") pod \"a1fb58d6-d9a4-4095-be46-a544216963f7\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.565281 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "a1fb58d6-d9a4-4095-be46-a544216963f7" (UID: "a1fb58d6-d9a4-4095-be46-a544216963f7"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.565348 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "a1fb58d6-d9a4-4095-be46-a544216963f7" (UID: "a1fb58d6-d9a4-4095-be46-a544216963f7"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.565313 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-run-openvswitch\") pod \"a1fb58d6-d9a4-4095-be46-a544216963f7\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.565333 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "a1fb58d6-d9a4-4095-be46-a544216963f7" (UID: "a1fb58d6-d9a4-4095-be46-a544216963f7"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.565434 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-systemd-units\") pod \"a1fb58d6-d9a4-4095-be46-a544216963f7\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.565489 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"a1fb58d6-d9a4-4095-be46-a544216963f7\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.565517 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "a1fb58d6-d9a4-4095-be46-a544216963f7" (UID: "a1fb58d6-d9a4-4095-be46-a544216963f7"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.565527 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a1fb58d6-d9a4-4095-be46-a544216963f7-ovnkube-config\") pod \"a1fb58d6-d9a4-4095-be46-a544216963f7\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.565565 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "a1fb58d6-d9a4-4095-be46-a544216963f7" (UID: "a1fb58d6-d9a4-4095-be46-a544216963f7"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.565587 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-run-ovn\") pod \"a1fb58d6-d9a4-4095-be46-a544216963f7\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.565631 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-run-systemd\") pod \"a1fb58d6-d9a4-4095-be46-a544216963f7\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.565658 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "a1fb58d6-d9a4-4095-be46-a544216963f7" (UID: "a1fb58d6-d9a4-4095-be46-a544216963f7"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.565665 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-cni-netd\") pod \"a1fb58d6-d9a4-4095-be46-a544216963f7\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.565701 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-var-lib-openvswitch\") pod \"a1fb58d6-d9a4-4095-be46-a544216963f7\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.565743 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a1fb58d6-d9a4-4095-be46-a544216963f7-ovnkube-script-lib\") pod \"a1fb58d6-d9a4-4095-be46-a544216963f7\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.565770 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-node-log\") pod \"a1fb58d6-d9a4-4095-be46-a544216963f7\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.565819 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fskkf\" (UniqueName: \"kubernetes.io/projected/a1fb58d6-d9a4-4095-be46-a544216963f7-kube-api-access-fskkf\") pod \"a1fb58d6-d9a4-4095-be46-a544216963f7\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.565822 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1fb58d6-d9a4-4095-be46-a544216963f7-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "a1fb58d6-d9a4-4095-be46-a544216963f7" (UID: "a1fb58d6-d9a4-4095-be46-a544216963f7"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.565815 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "a1fb58d6-d9a4-4095-be46-a544216963f7" (UID: "a1fb58d6-d9a4-4095-be46-a544216963f7"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.565844 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-node-log" (OuterVolumeSpecName: "node-log") pod "a1fb58d6-d9a4-4095-be46-a544216963f7" (UID: "a1fb58d6-d9a4-4095-be46-a544216963f7"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.565875 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-cni-bin\") pod \"a1fb58d6-d9a4-4095-be46-a544216963f7\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.565910 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "a1fb58d6-d9a4-4095-be46-a544216963f7" (UID: "a1fb58d6-d9a4-4095-be46-a544216963f7"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.565925 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "a1fb58d6-d9a4-4095-be46-a544216963f7" (UID: "a1fb58d6-d9a4-4095-be46-a544216963f7"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.566003 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-log-socket" (OuterVolumeSpecName: "log-socket") pod "a1fb58d6-d9a4-4095-be46-a544216963f7" (UID: "a1fb58d6-d9a4-4095-be46-a544216963f7"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.565971 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-log-socket\") pod \"a1fb58d6-d9a4-4095-be46-a544216963f7\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.566090 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-run-ovn-kubernetes\") pod \"a1fb58d6-d9a4-4095-be46-a544216963f7\" (UID: \"a1fb58d6-d9a4-4095-be46-a544216963f7\") " Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.566194 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "a1fb58d6-d9a4-4095-be46-a544216963f7" (UID: "a1fb58d6-d9a4-4095-be46-a544216963f7"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.566328 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1fb58d6-d9a4-4095-be46-a544216963f7-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "a1fb58d6-d9a4-4095-be46-a544216963f7" (UID: "a1fb58d6-d9a4-4095-be46-a544216963f7"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.565630 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1fb58d6-d9a4-4095-be46-a544216963f7-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "a1fb58d6-d9a4-4095-be46-a544216963f7" (UID: "a1fb58d6-d9a4-4095-be46-a544216963f7"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.566884 4796 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.566918 4796 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a1fb58d6-d9a4-4095-be46-a544216963f7-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.566937 4796 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.566955 4796 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.566973 4796 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.566990 4796 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.567010 4796 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.567030 4796 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a1fb58d6-d9a4-4095-be46-a544216963f7-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.567048 4796 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.567066 4796 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.567085 4796 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.567102 4796 
reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a1fb58d6-d9a4-4095-be46-a544216963f7-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.567118 4796 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-node-log\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.567134 4796 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.567150 4796 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-log-socket\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.567167 4796 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.567187 4796 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-host-slash\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.572003 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1fb58d6-d9a4-4095-be46-a544216963f7-kube-api-access-fskkf" (OuterVolumeSpecName: "kube-api-access-fskkf") pod "a1fb58d6-d9a4-4095-be46-a544216963f7" (UID: "a1fb58d6-d9a4-4095-be46-a544216963f7"). InnerVolumeSpecName "kube-api-access-fskkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.572774 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1fb58d6-d9a4-4095-be46-a544216963f7-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "a1fb58d6-d9a4-4095-be46-a544216963f7" (UID: "a1fb58d6-d9a4-4095-be46-a544216963f7"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.580163 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "a1fb58d6-d9a4-4095-be46-a544216963f7" (UID: "a1fb58d6-d9a4-4095-be46-a544216963f7"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.667939 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-run-openvswitch\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.668090 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/649c8573-78e8-48d4-9ac7-9744e3b0338a-env-overrides\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.668140 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-systemd-units\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.668163 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-run-ovn\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.668190 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/649c8573-78e8-48d4-9ac7-9744e3b0338a-ovn-node-metrics-cert\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.668233 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-host-cni-netd\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.668281 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/649c8573-78e8-48d4-9ac7-9744e3b0338a-ovnkube-script-lib\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.668315 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-log-socket\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.668343 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-var-lib-openvswitch\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.668381 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/649c8573-78e8-48d4-9ac7-9744e3b0338a-ovnkube-config\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.668439 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-host-run-netns\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.668480 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-host-cni-bin\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.668521 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-host-run-ovn-kubernetes\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.668560 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-etc-openvswitch\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.668591 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-node-log\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.668651 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-run-systemd\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.668690 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-host-kubelet\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.668717 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.668758 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzhgr\" (UniqueName: \"kubernetes.io/projected/649c8573-78e8-48d4-9ac7-9744e3b0338a-kube-api-access-zzhgr\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.668791 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-host-slash\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.668878 4796 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a1fb58d6-d9a4-4095-be46-a544216963f7-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.668905 4796 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a1fb58d6-d9a4-4095-be46-a544216963f7-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.668918 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fskkf\" (UniqueName: \"kubernetes.io/projected/a1fb58d6-d9a4-4095-be46-a544216963f7-kube-api-access-fskkf\") on node \"crc\" DevicePath \"\"" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.770325 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-log-socket\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.770394 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-var-lib-openvswitch\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.770436 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/649c8573-78e8-48d4-9ac7-9744e3b0338a-ovnkube-config\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.770475 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-host-run-netns\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc 
kubenswrapper[4796]: I0127 06:58:12.770512 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-host-cni-bin\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.770576 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-host-run-ovn-kubernetes\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.770607 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-etc-openvswitch\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.770641 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-node-log\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.770686 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-run-systemd\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.770719 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-host-kubelet\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.770754 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.770789 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzhgr\" (UniqueName: \"kubernetes.io/projected/649c8573-78e8-48d4-9ac7-9744e3b0338a-kube-api-access-zzhgr\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.770829 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-host-slash\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.770881 
4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-run-openvswitch\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.770929 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/649c8573-78e8-48d4-9ac7-9744e3b0338a-env-overrides\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.770959 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-systemd-units\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.770985 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-run-ovn\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.771015 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/649c8573-78e8-48d4-9ac7-9744e3b0338a-ovn-node-metrics-cert\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.771057 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-host-cni-netd\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.771089 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/649c8573-78e8-48d4-9ac7-9744e3b0338a-ovnkube-script-lib\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.771493 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-host-kubelet\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.771603 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-log-socket\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.771644 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-var-lib-openvswitch\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.772702 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/649c8573-78e8-48d4-9ac7-9744e3b0338a-ovnkube-script-lib\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.772799 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.772862 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/649c8573-78e8-48d4-9ac7-9744e3b0338a-ovnkube-config\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.772909 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-host-run-netns\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.772942 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-host-cni-bin\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.772974 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-host-run-ovn-kubernetes\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.773004 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-etc-openvswitch\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.773032 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-node-log\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.773063 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-run-systemd\") pod \"ovnkube-node-rgttg\" 
(UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.773097 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-systemd-units\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.773132 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-host-slash\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.773163 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-run-openvswitch\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.773227 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-run-ovn\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.773568 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/649c8573-78e8-48d4-9ac7-9744e3b0338a-env-overrides\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.773620 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/649c8573-78e8-48d4-9ac7-9744e3b0338a-host-cni-netd\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.778003 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/649c8573-78e8-48d4-9ac7-9744e3b0338a-ovn-node-metrics-cert\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.794357 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzhgr\" (UniqueName: \"kubernetes.io/projected/649c8573-78e8-48d4-9ac7-9744e3b0338a-kube-api-access-zzhgr\") pod \"ovnkube-node-rgttg\" (UID: \"649c8573-78e8-48d4-9ac7-9744e3b0338a\") " pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: I0127 06:58:12.801821 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:12 crc kubenswrapper[4796]: W0127 06:58:12.828611 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod649c8573_78e8_48d4_9ac7_9744e3b0338a.slice/crio-2bfdb5664463fcd4052c66b63ddf22dd63ced5d00a3b9af05f0cb0fc8923bef5 WatchSource:0}: Error finding container 2bfdb5664463fcd4052c66b63ddf22dd63ced5d00a3b9af05f0cb0fc8923bef5: Status 404 returned error can't find the container with id 2bfdb5664463fcd4052c66b63ddf22dd63ced5d00a3b9af05f0cb0fc8923bef5 Jan 27 06:58:13 crc kubenswrapper[4796]: I0127 06:58:13.422017 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-46ql2_b3555bc2-e335-4479-8b6f-8b5970b27a25/kube-multus/2.log" Jan 27 06:58:13 crc kubenswrapper[4796]: I0127 06:58:13.426315 4796 generic.go:334] "Generic (PLEG): container finished" podID="649c8573-78e8-48d4-9ac7-9744e3b0338a" containerID="9dc0d6f5a6da4a49619f873b59fce6c96048f3b3f20b804bcfc1c2631c6ec156" exitCode=0 Jan 27 06:58:13 crc kubenswrapper[4796]: I0127 06:58:13.426428 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" event={"ID":"649c8573-78e8-48d4-9ac7-9744e3b0338a","Type":"ContainerDied","Data":"9dc0d6f5a6da4a49619f873b59fce6c96048f3b3f20b804bcfc1c2631c6ec156"} Jan 27 06:58:13 crc kubenswrapper[4796]: I0127 06:58:13.426497 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" event={"ID":"649c8573-78e8-48d4-9ac7-9744e3b0338a","Type":"ContainerStarted","Data":"2bfdb5664463fcd4052c66b63ddf22dd63ced5d00a3b9af05f0cb0fc8923bef5"} Jan 27 06:58:13 crc kubenswrapper[4796]: I0127 06:58:13.437318 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xqmc4_a1fb58d6-d9a4-4095-be46-a544216963f7/ovn-acl-logging/0.log" Jan 27 06:58:13 crc kubenswrapper[4796]: I0127 06:58:13.438306 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xqmc4_a1fb58d6-d9a4-4095-be46-a544216963f7/ovn-controller/0.log" Jan 27 06:58:13 crc kubenswrapper[4796]: I0127 06:58:13.439097 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" event={"ID":"a1fb58d6-d9a4-4095-be46-a544216963f7","Type":"ContainerDied","Data":"860bd593a4ca13f292a4b7fe7cf8790d6b688898c3d267ad92164a7c15fff046"} Jan 27 06:58:13 crc kubenswrapper[4796]: I0127 06:58:13.439207 4796 scope.go:117] "RemoveContainer" containerID="b0114a658b7e6fb8e035558631620e865e26a2e4358c96ea56112e7debad7b53" Jan 27 06:58:13 crc kubenswrapper[4796]: I0127 06:58:13.439213 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xqmc4" Jan 27 06:58:13 crc kubenswrapper[4796]: I0127 06:58:13.473073 4796 scope.go:117] "RemoveContainer" containerID="7353129cb709af2186d760a00dbef3ae91dc7feeb6dab2149754accb95fae36b" Jan 27 06:58:13 crc kubenswrapper[4796]: I0127 06:58:13.496433 4796 scope.go:117] "RemoveContainer" containerID="37bbcf36779d5d8ff4b06b0dba628a581f534c90af5774321a51bb6360c88db6" Jan 27 06:58:13 crc kubenswrapper[4796]: I0127 06:58:13.530854 4796 scope.go:117] "RemoveContainer" containerID="6e09fbd802f33aa6f2ebf9135fea9a98a1e8c0bb14440043fbada1b528b505ed" Jan 27 06:58:13 crc kubenswrapper[4796]: I0127 06:58:13.555915 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xqmc4"] Jan 27 06:58:13 crc kubenswrapper[4796]: I0127 06:58:13.558286 4796 scope.go:117] "RemoveContainer" containerID="88151a9fd7ac9365f1563c367588d8093a16206cfebfc1703f8d9cfceaee5134" Jan 27 06:58:13 crc kubenswrapper[4796]: I0127 06:58:13.564769 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xqmc4"] Jan 27 06:58:13 crc kubenswrapper[4796]: I0127 06:58:13.587773 4796 scope.go:117] "RemoveContainer" containerID="8fa4ef31b3123acd6bde6c015b604904e010d9996f7ba318fa1a7ae674ea7cf5" Jan 27 06:58:13 crc kubenswrapper[4796]: I0127 06:58:13.620716 4796 scope.go:117] "RemoveContainer" containerID="c4e75d49621819f00db37b6f4471e5188a3eb781549bbae1422b6c2216a3366c" Jan 27 06:58:13 crc kubenswrapper[4796]: I0127 06:58:13.636969 4796 scope.go:117] "RemoveContainer" containerID="bafac0d6ec40525fc7e61773ebfc410b7f7f42535491b72e0301ce93bd1a6a46" Jan 27 06:58:13 crc kubenswrapper[4796]: I0127 06:58:13.651276 4796 scope.go:117] "RemoveContainer" containerID="b50c503924f41773de51c715b03efcc8708e3c9e69ea615182fb8f9028d9ed4d" Jan 27 06:58:13 crc kubenswrapper[4796]: I0127 06:58:13.755034 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-qwbkz" Jan 27 06:58:14 crc kubenswrapper[4796]: I0127 06:58:14.448794 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" event={"ID":"649c8573-78e8-48d4-9ac7-9744e3b0338a","Type":"ContainerStarted","Data":"a15daa785ba1a01575b2be11c18535d0f2c3d996389448dae038c190422b514d"} Jan 27 06:58:14 crc kubenswrapper[4796]: I0127 06:58:14.449268 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" event={"ID":"649c8573-78e8-48d4-9ac7-9744e3b0338a","Type":"ContainerStarted","Data":"6eddf389598d6522ab5019a0423bfe01dda41268179bc47176579f05686f9f5c"} Jan 27 06:58:14 crc kubenswrapper[4796]: I0127 06:58:14.449296 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" event={"ID":"649c8573-78e8-48d4-9ac7-9744e3b0338a","Type":"ContainerStarted","Data":"2431ecc942a9d85c2af770671b1725f95a9c25bc2f76b8c00bcdcddd061c607f"} Jan 27 06:58:14 crc kubenswrapper[4796]: I0127 06:58:14.449314 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" event={"ID":"649c8573-78e8-48d4-9ac7-9744e3b0338a","Type":"ContainerStarted","Data":"37d6bf7191caafc3cc114fe43c69a47166190e4c93d53a9d71ea651f2cf9561a"} Jan 27 06:58:14 crc kubenswrapper[4796]: I0127 06:58:14.449332 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" 
event={"ID":"649c8573-78e8-48d4-9ac7-9744e3b0338a","Type":"ContainerStarted","Data":"b635c83d9aa64f8d70962b3567f02bc3f795fb1e080fe6ca3950b7750f04df35"} Jan 27 06:58:14 crc kubenswrapper[4796]: I0127 06:58:14.449349 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" event={"ID":"649c8573-78e8-48d4-9ac7-9744e3b0338a","Type":"ContainerStarted","Data":"e78fa13901387cc8657fa7a4e39e0e801063316d48e098d742cc4c6cc3ab1fc3"} Jan 27 06:58:14 crc kubenswrapper[4796]: I0127 06:58:14.755723 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1fb58d6-d9a4-4095-be46-a544216963f7" path="/var/lib/kubelet/pods/a1fb58d6-d9a4-4095-be46-a544216963f7/volumes" Jan 27 06:58:16 crc kubenswrapper[4796]: I0127 06:58:16.468316 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" event={"ID":"649c8573-78e8-48d4-9ac7-9744e3b0338a","Type":"ContainerStarted","Data":"03138c1c36aefc976db1234fc377a8d97b660caaab403ba87cdc90fcf2aff13e"} Jan 27 06:58:19 crc kubenswrapper[4796]: I0127 06:58:19.489219 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" event={"ID":"649c8573-78e8-48d4-9ac7-9744e3b0338a","Type":"ContainerStarted","Data":"947ffce4ed3f9ed9d718e62986117d54374af4d33335d1a153d81fd47a0661c5"} Jan 27 06:58:19 crc kubenswrapper[4796]: I0127 06:58:19.490684 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:19 crc kubenswrapper[4796]: I0127 06:58:19.490775 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:19 crc kubenswrapper[4796]: I0127 06:58:19.514916 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" podStartSLOduration=7.514902514 podStartE2EDuration="7.514902514s" podCreationTimestamp="2026-01-27 06:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:58:19.514409082 +0000 UTC m=+720.621376409" watchObservedRunningTime="2026-01-27 06:58:19.514902514 +0000 UTC m=+720.621869851" Jan 27 06:58:19 crc kubenswrapper[4796]: I0127 06:58:19.521348 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:19 crc kubenswrapper[4796]: I0127 06:58:19.523420 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:20 crc kubenswrapper[4796]: I0127 06:58:20.495312 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:23 crc kubenswrapper[4796]: I0127 06:58:23.747066 4796 scope.go:117] "RemoveContainer" containerID="c413a9ee0ad373bb16813255f639547353f408c44b80cb704801bfad74788d62" Jan 27 06:58:23 crc kubenswrapper[4796]: E0127 06:58:23.747978 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-46ql2_openshift-multus(b3555bc2-e335-4479-8b6f-8b5970b27a25)\"" pod="openshift-multus/multus-46ql2" podUID="b3555bc2-e335-4479-8b6f-8b5970b27a25" Jan 27 06:58:35 crc kubenswrapper[4796]: I0127 06:58:35.747270 4796 scope.go:117] "RemoveContainer" 
containerID="c413a9ee0ad373bb16813255f639547353f408c44b80cb704801bfad74788d62" Jan 27 06:58:36 crc kubenswrapper[4796]: I0127 06:58:36.592740 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-46ql2_b3555bc2-e335-4479-8b6f-8b5970b27a25/kube-multus/2.log" Jan 27 06:58:36 crc kubenswrapper[4796]: I0127 06:58:36.593160 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-46ql2" event={"ID":"b3555bc2-e335-4479-8b6f-8b5970b27a25","Type":"ContainerStarted","Data":"10856d04b251ac93b0fc8e551a6b6910c331352fc672845a537ffbb11a2a5835"} Jan 27 06:58:42 crc kubenswrapper[4796]: I0127 06:58:42.836076 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rgttg" Jan 27 06:58:54 crc kubenswrapper[4796]: I0127 06:58:54.951251 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x2pb7"] Jan 27 06:58:54 crc kubenswrapper[4796]: I0127 06:58:54.953880 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x2pb7" Jan 27 06:58:54 crc kubenswrapper[4796]: I0127 06:58:54.955888 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 06:58:54 crc kubenswrapper[4796]: I0127 06:58:54.959108 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x2pb7"] Jan 27 06:58:55 crc kubenswrapper[4796]: I0127 06:58:55.062434 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vfbg\" (UniqueName: \"kubernetes.io/projected/14c08575-c4be-4c00-82e0-57ba579cd64e-kube-api-access-7vfbg\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x2pb7\" (UID: \"14c08575-c4be-4c00-82e0-57ba579cd64e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x2pb7" Jan 27 06:58:55 crc kubenswrapper[4796]: I0127 06:58:55.062501 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/14c08575-c4be-4c00-82e0-57ba579cd64e-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x2pb7\" (UID: \"14c08575-c4be-4c00-82e0-57ba579cd64e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x2pb7" Jan 27 06:58:55 crc kubenswrapper[4796]: I0127 06:58:55.062592 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/14c08575-c4be-4c00-82e0-57ba579cd64e-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x2pb7\" (UID: \"14c08575-c4be-4c00-82e0-57ba579cd64e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x2pb7" Jan 27 06:58:55 crc kubenswrapper[4796]: I0127 06:58:55.164372 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/14c08575-c4be-4c00-82e0-57ba579cd64e-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x2pb7\" (UID: \"14c08575-c4be-4c00-82e0-57ba579cd64e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x2pb7" Jan 27 06:58:55 crc kubenswrapper[4796]: 
I0127 06:58:55.164731 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vfbg\" (UniqueName: \"kubernetes.io/projected/14c08575-c4be-4c00-82e0-57ba579cd64e-kube-api-access-7vfbg\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x2pb7\" (UID: \"14c08575-c4be-4c00-82e0-57ba579cd64e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x2pb7" Jan 27 06:58:55 crc kubenswrapper[4796]: I0127 06:58:55.164851 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/14c08575-c4be-4c00-82e0-57ba579cd64e-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x2pb7\" (UID: \"14c08575-c4be-4c00-82e0-57ba579cd64e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x2pb7" Jan 27 06:58:55 crc kubenswrapper[4796]: I0127 06:58:55.165044 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/14c08575-c4be-4c00-82e0-57ba579cd64e-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x2pb7\" (UID: \"14c08575-c4be-4c00-82e0-57ba579cd64e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x2pb7" Jan 27 06:58:55 crc kubenswrapper[4796]: I0127 06:58:55.166714 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/14c08575-c4be-4c00-82e0-57ba579cd64e-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x2pb7\" (UID: \"14c08575-c4be-4c00-82e0-57ba579cd64e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x2pb7" Jan 27 06:58:55 crc kubenswrapper[4796]: I0127 06:58:55.192096 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vfbg\" (UniqueName: \"kubernetes.io/projected/14c08575-c4be-4c00-82e0-57ba579cd64e-kube-api-access-7vfbg\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x2pb7\" (UID: \"14c08575-c4be-4c00-82e0-57ba579cd64e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x2pb7" Jan 27 06:58:55 crc kubenswrapper[4796]: I0127 06:58:55.269548 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x2pb7" Jan 27 06:58:55 crc kubenswrapper[4796]: I0127 06:58:55.754836 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x2pb7"] Jan 27 06:58:55 crc kubenswrapper[4796]: W0127 06:58:55.763082 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14c08575_c4be_4c00_82e0_57ba579cd64e.slice/crio-4a7c66297d5c144d30ccd27505b492b0b37196eb1dd05c8b62fddd9a8ffc846c WatchSource:0}: Error finding container 4a7c66297d5c144d30ccd27505b492b0b37196eb1dd05c8b62fddd9a8ffc846c: Status 404 returned error can't find the container with id 4a7c66297d5c144d30ccd27505b492b0b37196eb1dd05c8b62fddd9a8ffc846c Jan 27 06:58:56 crc kubenswrapper[4796]: I0127 06:58:56.733037 4796 generic.go:334] "Generic (PLEG): container finished" podID="14c08575-c4be-4c00-82e0-57ba579cd64e" containerID="d8786b98e91bb3673fe087cc1701caed0c1fe0f0bf38973305dca810e6644124" exitCode=0 Jan 27 06:58:56 crc kubenswrapper[4796]: I0127 06:58:56.733183 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x2pb7" event={"ID":"14c08575-c4be-4c00-82e0-57ba579cd64e","Type":"ContainerDied","Data":"d8786b98e91bb3673fe087cc1701caed0c1fe0f0bf38973305dca810e6644124"} Jan 27 06:58:56 crc kubenswrapper[4796]: I0127 06:58:56.733505 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x2pb7" event={"ID":"14c08575-c4be-4c00-82e0-57ba579cd64e","Type":"ContainerStarted","Data":"4a7c66297d5c144d30ccd27505b492b0b37196eb1dd05c8b62fddd9a8ffc846c"} Jan 27 06:58:57 crc kubenswrapper[4796]: I0127 06:58:57.111623 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bzkbn"] Jan 27 06:58:57 crc kubenswrapper[4796]: I0127 06:58:57.112904 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bzkbn" Jan 27 06:58:57 crc kubenswrapper[4796]: I0127 06:58:57.131629 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bzkbn"] Jan 27 06:58:57 crc kubenswrapper[4796]: I0127 06:58:57.294563 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqc5x\" (UniqueName: \"kubernetes.io/projected/360ae9f1-f5d9-4bb8-94cb-96e1556b9d69-kube-api-access-wqc5x\") pod \"redhat-operators-bzkbn\" (UID: \"360ae9f1-f5d9-4bb8-94cb-96e1556b9d69\") " pod="openshift-marketplace/redhat-operators-bzkbn" Jan 27 06:58:57 crc kubenswrapper[4796]: I0127 06:58:57.294644 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/360ae9f1-f5d9-4bb8-94cb-96e1556b9d69-catalog-content\") pod \"redhat-operators-bzkbn\" (UID: \"360ae9f1-f5d9-4bb8-94cb-96e1556b9d69\") " pod="openshift-marketplace/redhat-operators-bzkbn" Jan 27 06:58:57 crc kubenswrapper[4796]: I0127 06:58:57.294697 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/360ae9f1-f5d9-4bb8-94cb-96e1556b9d69-utilities\") pod \"redhat-operators-bzkbn\" (UID: \"360ae9f1-f5d9-4bb8-94cb-96e1556b9d69\") " pod="openshift-marketplace/redhat-operators-bzkbn" Jan 27 06:58:57 crc kubenswrapper[4796]: I0127 06:58:57.396462 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqc5x\" (UniqueName: \"kubernetes.io/projected/360ae9f1-f5d9-4bb8-94cb-96e1556b9d69-kube-api-access-wqc5x\") pod \"redhat-operators-bzkbn\" (UID: \"360ae9f1-f5d9-4bb8-94cb-96e1556b9d69\") " pod="openshift-marketplace/redhat-operators-bzkbn" Jan 27 06:58:57 crc kubenswrapper[4796]: I0127 06:58:57.396507 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/360ae9f1-f5d9-4bb8-94cb-96e1556b9d69-catalog-content\") pod \"redhat-operators-bzkbn\" (UID: \"360ae9f1-f5d9-4bb8-94cb-96e1556b9d69\") " pod="openshift-marketplace/redhat-operators-bzkbn" Jan 27 06:58:57 crc kubenswrapper[4796]: I0127 06:58:57.396560 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/360ae9f1-f5d9-4bb8-94cb-96e1556b9d69-utilities\") pod \"redhat-operators-bzkbn\" (UID: \"360ae9f1-f5d9-4bb8-94cb-96e1556b9d69\") " pod="openshift-marketplace/redhat-operators-bzkbn" Jan 27 06:58:57 crc kubenswrapper[4796]: I0127 06:58:57.397018 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/360ae9f1-f5d9-4bb8-94cb-96e1556b9d69-utilities\") pod \"redhat-operators-bzkbn\" (UID: \"360ae9f1-f5d9-4bb8-94cb-96e1556b9d69\") " pod="openshift-marketplace/redhat-operators-bzkbn" Jan 27 06:58:57 crc kubenswrapper[4796]: I0127 06:58:57.397112 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/360ae9f1-f5d9-4bb8-94cb-96e1556b9d69-catalog-content\") pod \"redhat-operators-bzkbn\" (UID: \"360ae9f1-f5d9-4bb8-94cb-96e1556b9d69\") " pod="openshift-marketplace/redhat-operators-bzkbn" Jan 27 06:58:57 crc kubenswrapper[4796]: I0127 06:58:57.414180 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wqc5x\" (UniqueName: \"kubernetes.io/projected/360ae9f1-f5d9-4bb8-94cb-96e1556b9d69-kube-api-access-wqc5x\") pod \"redhat-operators-bzkbn\" (UID: \"360ae9f1-f5d9-4bb8-94cb-96e1556b9d69\") " pod="openshift-marketplace/redhat-operators-bzkbn" Jan 27 06:58:57 crc kubenswrapper[4796]: I0127 06:58:57.445838 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bzkbn" Jan 27 06:58:57 crc kubenswrapper[4796]: I0127 06:58:57.656826 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bzkbn"] Jan 27 06:58:57 crc kubenswrapper[4796]: W0127 06:58:57.668633 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod360ae9f1_f5d9_4bb8_94cb_96e1556b9d69.slice/crio-69dea03b695d5374487585d2fd519a86ceb37321f6d33e6374d49f5429b201dc WatchSource:0}: Error finding container 69dea03b695d5374487585d2fd519a86ceb37321f6d33e6374d49f5429b201dc: Status 404 returned error can't find the container with id 69dea03b695d5374487585d2fd519a86ceb37321f6d33e6374d49f5429b201dc Jan 27 06:58:57 crc kubenswrapper[4796]: I0127 06:58:57.741804 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bzkbn" event={"ID":"360ae9f1-f5d9-4bb8-94cb-96e1556b9d69","Type":"ContainerStarted","Data":"69dea03b695d5374487585d2fd519a86ceb37321f6d33e6374d49f5429b201dc"} Jan 27 06:58:58 crc kubenswrapper[4796]: I0127 06:58:58.750407 4796 generic.go:334] "Generic (PLEG): container finished" podID="360ae9f1-f5d9-4bb8-94cb-96e1556b9d69" containerID="ff8782937689fecd601ec420509cefa7f3997ba0a547ff98db936f1d5303838a" exitCode=0 Jan 27 06:58:58 crc kubenswrapper[4796]: I0127 06:58:58.755898 4796 generic.go:334] "Generic (PLEG): container finished" podID="14c08575-c4be-4c00-82e0-57ba579cd64e" containerID="9b46041f1e69ad60618121dc2006cfdd731d77c04f7645234524b79917ef2a24" exitCode=0 Jan 27 06:58:58 crc kubenswrapper[4796]: I0127 06:58:58.759680 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bzkbn" event={"ID":"360ae9f1-f5d9-4bb8-94cb-96e1556b9d69","Type":"ContainerDied","Data":"ff8782937689fecd601ec420509cefa7f3997ba0a547ff98db936f1d5303838a"} Jan 27 06:58:58 crc kubenswrapper[4796]: I0127 06:58:58.759716 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x2pb7" event={"ID":"14c08575-c4be-4c00-82e0-57ba579cd64e","Type":"ContainerDied","Data":"9b46041f1e69ad60618121dc2006cfdd731d77c04f7645234524b79917ef2a24"} Jan 27 06:58:59 crc kubenswrapper[4796]: I0127 06:58:59.763828 4796 generic.go:334] "Generic (PLEG): container finished" podID="14c08575-c4be-4c00-82e0-57ba579cd64e" containerID="e7d638e724171033ef3c353e99f2239bbf689fba8f22f7d9e001bfec079b1e2d" exitCode=0 Jan 27 06:58:59 crc kubenswrapper[4796]: I0127 06:58:59.766007 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x2pb7" event={"ID":"14c08575-c4be-4c00-82e0-57ba579cd64e","Type":"ContainerDied","Data":"e7d638e724171033ef3c353e99f2239bbf689fba8f22f7d9e001bfec079b1e2d"} Jan 27 06:58:59 crc kubenswrapper[4796]: I0127 06:58:59.768147 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bzkbn" 
event={"ID":"360ae9f1-f5d9-4bb8-94cb-96e1556b9d69","Type":"ContainerStarted","Data":"f9ef00658a5b4d47cb3f59bf5582e04c066df135441bfab8812426b8df3f9f6d"} Jan 27 06:59:00 crc kubenswrapper[4796]: I0127 06:59:00.776201 4796 generic.go:334] "Generic (PLEG): container finished" podID="360ae9f1-f5d9-4bb8-94cb-96e1556b9d69" containerID="f9ef00658a5b4d47cb3f59bf5582e04c066df135441bfab8812426b8df3f9f6d" exitCode=0 Jan 27 06:59:00 crc kubenswrapper[4796]: I0127 06:59:00.776265 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bzkbn" event={"ID":"360ae9f1-f5d9-4bb8-94cb-96e1556b9d69","Type":"ContainerDied","Data":"f9ef00658a5b4d47cb3f59bf5582e04c066df135441bfab8812426b8df3f9f6d"} Jan 27 06:59:01 crc kubenswrapper[4796]: I0127 06:59:01.029426 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x2pb7" Jan 27 06:59:01 crc kubenswrapper[4796]: I0127 06:59:01.147105 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vfbg\" (UniqueName: \"kubernetes.io/projected/14c08575-c4be-4c00-82e0-57ba579cd64e-kube-api-access-7vfbg\") pod \"14c08575-c4be-4c00-82e0-57ba579cd64e\" (UID: \"14c08575-c4be-4c00-82e0-57ba579cd64e\") " Jan 27 06:59:01 crc kubenswrapper[4796]: I0127 06:59:01.147204 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/14c08575-c4be-4c00-82e0-57ba579cd64e-util\") pod \"14c08575-c4be-4c00-82e0-57ba579cd64e\" (UID: \"14c08575-c4be-4c00-82e0-57ba579cd64e\") " Jan 27 06:59:01 crc kubenswrapper[4796]: I0127 06:59:01.147266 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/14c08575-c4be-4c00-82e0-57ba579cd64e-bundle\") pod \"14c08575-c4be-4c00-82e0-57ba579cd64e\" (UID: \"14c08575-c4be-4c00-82e0-57ba579cd64e\") " Jan 27 06:59:01 crc kubenswrapper[4796]: I0127 06:59:01.148161 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14c08575-c4be-4c00-82e0-57ba579cd64e-bundle" (OuterVolumeSpecName: "bundle") pod "14c08575-c4be-4c00-82e0-57ba579cd64e" (UID: "14c08575-c4be-4c00-82e0-57ba579cd64e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:59:01 crc kubenswrapper[4796]: I0127 06:59:01.155742 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14c08575-c4be-4c00-82e0-57ba579cd64e-kube-api-access-7vfbg" (OuterVolumeSpecName: "kube-api-access-7vfbg") pod "14c08575-c4be-4c00-82e0-57ba579cd64e" (UID: "14c08575-c4be-4c00-82e0-57ba579cd64e"). InnerVolumeSpecName "kube-api-access-7vfbg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:59:01 crc kubenswrapper[4796]: I0127 06:59:01.250383 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vfbg\" (UniqueName: \"kubernetes.io/projected/14c08575-c4be-4c00-82e0-57ba579cd64e-kube-api-access-7vfbg\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:01 crc kubenswrapper[4796]: I0127 06:59:01.250432 4796 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/14c08575-c4be-4c00-82e0-57ba579cd64e-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:01 crc kubenswrapper[4796]: I0127 06:59:01.327770 4796 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 27 06:59:01 crc kubenswrapper[4796]: I0127 06:59:01.340816 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14c08575-c4be-4c00-82e0-57ba579cd64e-util" (OuterVolumeSpecName: "util") pod "14c08575-c4be-4c00-82e0-57ba579cd64e" (UID: "14c08575-c4be-4c00-82e0-57ba579cd64e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:59:01 crc kubenswrapper[4796]: I0127 06:59:01.355590 4796 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/14c08575-c4be-4c00-82e0-57ba579cd64e-util\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:01 crc kubenswrapper[4796]: I0127 06:59:01.785655 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bzkbn" event={"ID":"360ae9f1-f5d9-4bb8-94cb-96e1556b9d69","Type":"ContainerStarted","Data":"2b88b7e20039e951cc3bc8a65cd99ed3bb7b3f4a2638c1464e8809415c910bd1"} Jan 27 06:59:01 crc kubenswrapper[4796]: I0127 06:59:01.788867 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x2pb7" event={"ID":"14c08575-c4be-4c00-82e0-57ba579cd64e","Type":"ContainerDied","Data":"4a7c66297d5c144d30ccd27505b492b0b37196eb1dd05c8b62fddd9a8ffc846c"} Jan 27 06:59:01 crc kubenswrapper[4796]: I0127 06:59:01.788908 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x2pb7" Jan 27 06:59:01 crc kubenswrapper[4796]: I0127 06:59:01.788913 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a7c66297d5c144d30ccd27505b492b0b37196eb1dd05c8b62fddd9a8ffc846c" Jan 27 06:59:01 crc kubenswrapper[4796]: I0127 06:59:01.815028 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bzkbn" podStartSLOduration=2.220344508 podStartE2EDuration="4.815002996s" podCreationTimestamp="2026-01-27 06:58:57 +0000 UTC" firstStartedPulling="2026-01-27 06:58:58.752248286 +0000 UTC m=+759.859215613" lastFinishedPulling="2026-01-27 06:59:01.346906764 +0000 UTC m=+762.453874101" observedRunningTime="2026-01-27 06:59:01.812059742 +0000 UTC m=+762.919027069" watchObservedRunningTime="2026-01-27 06:59:01.815002996 +0000 UTC m=+762.921970353" Jan 27 06:59:06 crc kubenswrapper[4796]: I0127 06:59:06.269596 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-kvmjs"] Jan 27 06:59:06 crc kubenswrapper[4796]: E0127 06:59:06.270382 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c08575-c4be-4c00-82e0-57ba579cd64e" containerName="util" Jan 27 06:59:06 crc kubenswrapper[4796]: I0127 06:59:06.270399 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c08575-c4be-4c00-82e0-57ba579cd64e" containerName="util" Jan 27 06:59:06 crc kubenswrapper[4796]: E0127 06:59:06.270413 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c08575-c4be-4c00-82e0-57ba579cd64e" containerName="pull" Jan 27 06:59:06 crc kubenswrapper[4796]: I0127 06:59:06.270420 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c08575-c4be-4c00-82e0-57ba579cd64e" containerName="pull" Jan 27 06:59:06 crc kubenswrapper[4796]: E0127 06:59:06.270437 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c08575-c4be-4c00-82e0-57ba579cd64e" containerName="extract" Jan 27 06:59:06 crc kubenswrapper[4796]: I0127 06:59:06.270445 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c08575-c4be-4c00-82e0-57ba579cd64e" containerName="extract" Jan 27 06:59:06 crc kubenswrapper[4796]: I0127 06:59:06.270588 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="14c08575-c4be-4c00-82e0-57ba579cd64e" containerName="extract" Jan 27 06:59:06 crc kubenswrapper[4796]: I0127 06:59:06.271080 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-kvmjs" Jan 27 06:59:06 crc kubenswrapper[4796]: I0127 06:59:06.273337 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-tcgzm" Jan 27 06:59:06 crc kubenswrapper[4796]: I0127 06:59:06.273702 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 27 06:59:06 crc kubenswrapper[4796]: I0127 06:59:06.273816 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 27 06:59:06 crc kubenswrapper[4796]: I0127 06:59:06.284390 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-kvmjs"] Jan 27 06:59:06 crc kubenswrapper[4796]: I0127 06:59:06.321798 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c47r4\" (UniqueName: \"kubernetes.io/projected/adb86a43-6bb2-4198-afe4-ca6484e020df-kube-api-access-c47r4\") pod \"nmstate-operator-646758c888-kvmjs\" (UID: \"adb86a43-6bb2-4198-afe4-ca6484e020df\") " pod="openshift-nmstate/nmstate-operator-646758c888-kvmjs" Jan 27 06:59:06 crc kubenswrapper[4796]: I0127 06:59:06.423253 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c47r4\" (UniqueName: \"kubernetes.io/projected/adb86a43-6bb2-4198-afe4-ca6484e020df-kube-api-access-c47r4\") pod \"nmstate-operator-646758c888-kvmjs\" (UID: \"adb86a43-6bb2-4198-afe4-ca6484e020df\") " pod="openshift-nmstate/nmstate-operator-646758c888-kvmjs" Jan 27 06:59:06 crc kubenswrapper[4796]: I0127 06:59:06.446232 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c47r4\" (UniqueName: \"kubernetes.io/projected/adb86a43-6bb2-4198-afe4-ca6484e020df-kube-api-access-c47r4\") pod \"nmstate-operator-646758c888-kvmjs\" (UID: \"adb86a43-6bb2-4198-afe4-ca6484e020df\") " pod="openshift-nmstate/nmstate-operator-646758c888-kvmjs" Jan 27 06:59:06 crc kubenswrapper[4796]: I0127 06:59:06.588596 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-kvmjs" Jan 27 06:59:06 crc kubenswrapper[4796]: I0127 06:59:06.800230 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-kvmjs"] Jan 27 06:59:06 crc kubenswrapper[4796]: I0127 06:59:06.817817 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-kvmjs" event={"ID":"adb86a43-6bb2-4198-afe4-ca6484e020df","Type":"ContainerStarted","Data":"492116e558929be4c20e6d11826b3b730a4cb9ada75a644230a0fea8af346f47"} Jan 27 06:59:07 crc kubenswrapper[4796]: I0127 06:59:07.447398 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bzkbn" Jan 27 06:59:07 crc kubenswrapper[4796]: I0127 06:59:07.447454 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bzkbn" Jan 27 06:59:08 crc kubenswrapper[4796]: I0127 06:59:08.510816 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bzkbn" podUID="360ae9f1-f5d9-4bb8-94cb-96e1556b9d69" containerName="registry-server" probeResult="failure" output=< Jan 27 06:59:08 crc kubenswrapper[4796]: timeout: failed to connect service ":50051" within 1s Jan 27 06:59:08 crc kubenswrapper[4796]: > Jan 27 06:59:10 crc kubenswrapper[4796]: I0127 06:59:10.843337 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-kvmjs" event={"ID":"adb86a43-6bb2-4198-afe4-ca6484e020df","Type":"ContainerStarted","Data":"45841f47976783fe6a488938e727672bba086279bda7b67b1b58d62955854943"} Jan 27 06:59:10 crc kubenswrapper[4796]: I0127 06:59:10.865975 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-kvmjs" podStartSLOduration=1.9719926669999999 podStartE2EDuration="4.865933773s" podCreationTimestamp="2026-01-27 06:59:06 +0000 UTC" firstStartedPulling="2026-01-27 06:59:06.811450863 +0000 UTC m=+767.918418200" lastFinishedPulling="2026-01-27 06:59:09.705391979 +0000 UTC m=+770.812359306" observedRunningTime="2026-01-27 06:59:10.862520517 +0000 UTC m=+771.969487864" watchObservedRunningTime="2026-01-27 06:59:10.865933773 +0000 UTC m=+771.972901090" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.232166 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-6qqbr"] Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.234516 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-6qqbr" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.239480 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-jc87p" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.248630 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-6qqbr"] Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.253106 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-bctnp"] Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.254141 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bctnp" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.255838 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.266937 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-j74n8"] Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.267814 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-j74n8" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.280660 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-bctnp"] Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.348249 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7b54d04c-747b-48f4-92b1-bcece1212861-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-bctnp\" (UID: \"7b54d04c-747b-48f4-92b1-bcece1212861\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bctnp" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.348290 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pjhz\" (UniqueName: \"kubernetes.io/projected/9d2d7f37-8d8f-4d7b-a77b-1769764774a3-kube-api-access-5pjhz\") pod \"nmstate-metrics-54757c584b-6qqbr\" (UID: \"9d2d7f37-8d8f-4d7b-a77b-1769764774a3\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-6qqbr" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.348316 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqn9x\" (UniqueName: \"kubernetes.io/projected/7b54d04c-747b-48f4-92b1-bcece1212861-kube-api-access-hqn9x\") pod \"nmstate-webhook-8474b5b9d8-bctnp\" (UID: \"7b54d04c-747b-48f4-92b1-bcece1212861\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bctnp" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.367820 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-7jd5j"] Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.368407 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7jd5j" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.371999 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.372361 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.373116 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-p5n52" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.376355 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-7jd5j"] Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.449604 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ac6cd9e3-211b-4aa3-9bf8-a76b68e6deb0-nmstate-lock\") pod \"nmstate-handler-j74n8\" (UID: \"ac6cd9e3-211b-4aa3-9bf8-a76b68e6deb0\") " pod="openshift-nmstate/nmstate-handler-j74n8" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.449841 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7b54d04c-747b-48f4-92b1-bcece1212861-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-bctnp\" (UID: \"7b54d04c-747b-48f4-92b1-bcece1212861\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bctnp" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.449931 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pjhz\" (UniqueName: \"kubernetes.io/projected/9d2d7f37-8d8f-4d7b-a77b-1769764774a3-kube-api-access-5pjhz\") pod \"nmstate-metrics-54757c584b-6qqbr\" (UID: \"9d2d7f37-8d8f-4d7b-a77b-1769764774a3\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-6qqbr" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.450013 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ac6cd9e3-211b-4aa3-9bf8-a76b68e6deb0-ovs-socket\") pod \"nmstate-handler-j74n8\" (UID: \"ac6cd9e3-211b-4aa3-9bf8-a76b68e6deb0\") " pod="openshift-nmstate/nmstate-handler-j74n8" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.450169 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqn9x\" (UniqueName: \"kubernetes.io/projected/7b54d04c-747b-48f4-92b1-bcece1212861-kube-api-access-hqn9x\") pod \"nmstate-webhook-8474b5b9d8-bctnp\" (UID: \"7b54d04c-747b-48f4-92b1-bcece1212861\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bctnp" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.450252 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ac6cd9e3-211b-4aa3-9bf8-a76b68e6deb0-dbus-socket\") pod \"nmstate-handler-j74n8\" (UID: \"ac6cd9e3-211b-4aa3-9bf8-a76b68e6deb0\") " pod="openshift-nmstate/nmstate-handler-j74n8" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.450334 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtl2j\" (UniqueName: \"kubernetes.io/projected/ac6cd9e3-211b-4aa3-9bf8-a76b68e6deb0-kube-api-access-gtl2j\") pod \"nmstate-handler-j74n8\" (UID: 
\"ac6cd9e3-211b-4aa3-9bf8-a76b68e6deb0\") " pod="openshift-nmstate/nmstate-handler-j74n8" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.469636 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7b54d04c-747b-48f4-92b1-bcece1212861-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-bctnp\" (UID: \"7b54d04c-747b-48f4-92b1-bcece1212861\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bctnp" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.473983 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqn9x\" (UniqueName: \"kubernetes.io/projected/7b54d04c-747b-48f4-92b1-bcece1212861-kube-api-access-hqn9x\") pod \"nmstate-webhook-8474b5b9d8-bctnp\" (UID: \"7b54d04c-747b-48f4-92b1-bcece1212861\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bctnp" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.476462 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pjhz\" (UniqueName: \"kubernetes.io/projected/9d2d7f37-8d8f-4d7b-a77b-1769764774a3-kube-api-access-5pjhz\") pod \"nmstate-metrics-54757c584b-6qqbr\" (UID: \"9d2d7f37-8d8f-4d7b-a77b-1769764774a3\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-6qqbr" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.550426 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-6qqbr" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.551702 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtl2j\" (UniqueName: \"kubernetes.io/projected/ac6cd9e3-211b-4aa3-9bf8-a76b68e6deb0-kube-api-access-gtl2j\") pod \"nmstate-handler-j74n8\" (UID: \"ac6cd9e3-211b-4aa3-9bf8-a76b68e6deb0\") " pod="openshift-nmstate/nmstate-handler-j74n8" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.553049 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/879b02ea-4120-4fc0-9be3-2cdabfc554f2-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-7jd5j\" (UID: \"879b02ea-4120-4fc0-9be3-2cdabfc554f2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7jd5j" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.553175 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ac6cd9e3-211b-4aa3-9bf8-a76b68e6deb0-nmstate-lock\") pod \"nmstate-handler-j74n8\" (UID: \"ac6cd9e3-211b-4aa3-9bf8-a76b68e6deb0\") " pod="openshift-nmstate/nmstate-handler-j74n8" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.553272 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/879b02ea-4120-4fc0-9be3-2cdabfc554f2-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-7jd5j\" (UID: \"879b02ea-4120-4fc0-9be3-2cdabfc554f2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7jd5j" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.553297 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ac6cd9e3-211b-4aa3-9bf8-a76b68e6deb0-nmstate-lock\") pod \"nmstate-handler-j74n8\" (UID: \"ac6cd9e3-211b-4aa3-9bf8-a76b68e6deb0\") " pod="openshift-nmstate/nmstate-handler-j74n8" Jan 27 06:59:15 crc 
kubenswrapper[4796]: I0127 06:59:15.553410 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57g7h\" (UniqueName: \"kubernetes.io/projected/879b02ea-4120-4fc0-9be3-2cdabfc554f2-kube-api-access-57g7h\") pod \"nmstate-console-plugin-7754f76f8b-7jd5j\" (UID: \"879b02ea-4120-4fc0-9be3-2cdabfc554f2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7jd5j" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.553494 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ac6cd9e3-211b-4aa3-9bf8-a76b68e6deb0-ovs-socket\") pod \"nmstate-handler-j74n8\" (UID: \"ac6cd9e3-211b-4aa3-9bf8-a76b68e6deb0\") " pod="openshift-nmstate/nmstate-handler-j74n8" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.553598 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ac6cd9e3-211b-4aa3-9bf8-a76b68e6deb0-dbus-socket\") pod \"nmstate-handler-j74n8\" (UID: \"ac6cd9e3-211b-4aa3-9bf8-a76b68e6deb0\") " pod="openshift-nmstate/nmstate-handler-j74n8" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.553614 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ac6cd9e3-211b-4aa3-9bf8-a76b68e6deb0-ovs-socket\") pod \"nmstate-handler-j74n8\" (UID: \"ac6cd9e3-211b-4aa3-9bf8-a76b68e6deb0\") " pod="openshift-nmstate/nmstate-handler-j74n8" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.553958 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ac6cd9e3-211b-4aa3-9bf8-a76b68e6deb0-dbus-socket\") pod \"nmstate-handler-j74n8\" (UID: \"ac6cd9e3-211b-4aa3-9bf8-a76b68e6deb0\") " pod="openshift-nmstate/nmstate-handler-j74n8" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.577858 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d787bcd-798jd"] Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.581916 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtl2j\" (UniqueName: \"kubernetes.io/projected/ac6cd9e3-211b-4aa3-9bf8-a76b68e6deb0-kube-api-access-gtl2j\") pod \"nmstate-handler-j74n8\" (UID: \"ac6cd9e3-211b-4aa3-9bf8-a76b68e6deb0\") " pod="openshift-nmstate/nmstate-handler-j74n8" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.585217 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bctnp" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.585488 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d787bcd-798jd" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.597170 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-j74n8" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.624482 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d787bcd-798jd"] Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.657349 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/879b02ea-4120-4fc0-9be3-2cdabfc554f2-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-7jd5j\" (UID: \"879b02ea-4120-4fc0-9be3-2cdabfc554f2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7jd5j" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.657389 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/879b02ea-4120-4fc0-9be3-2cdabfc554f2-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-7jd5j\" (UID: \"879b02ea-4120-4fc0-9be3-2cdabfc554f2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7jd5j" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.657410 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57g7h\" (UniqueName: \"kubernetes.io/projected/879b02ea-4120-4fc0-9be3-2cdabfc554f2-kube-api-access-57g7h\") pod \"nmstate-console-plugin-7754f76f8b-7jd5j\" (UID: \"879b02ea-4120-4fc0-9be3-2cdabfc554f2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7jd5j" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.658720 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/879b02ea-4120-4fc0-9be3-2cdabfc554f2-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-7jd5j\" (UID: \"879b02ea-4120-4fc0-9be3-2cdabfc554f2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7jd5j" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.664195 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/879b02ea-4120-4fc0-9be3-2cdabfc554f2-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-7jd5j\" (UID: \"879b02ea-4120-4fc0-9be3-2cdabfc554f2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7jd5j" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.678115 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57g7h\" (UniqueName: \"kubernetes.io/projected/879b02ea-4120-4fc0-9be3-2cdabfc554f2-kube-api-access-57g7h\") pod \"nmstate-console-plugin-7754f76f8b-7jd5j\" (UID: \"879b02ea-4120-4fc0-9be3-2cdabfc554f2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7jd5j" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.685823 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7jd5j" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.758200 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/903e4b83-2ba7-4ddb-a519-914852dab480-trusted-ca-bundle\") pod \"console-f9d787bcd-798jd\" (UID: \"903e4b83-2ba7-4ddb-a519-914852dab480\") " pod="openshift-console/console-f9d787bcd-798jd" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.758246 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtdv8\" (UniqueName: \"kubernetes.io/projected/903e4b83-2ba7-4ddb-a519-914852dab480-kube-api-access-rtdv8\") pod \"console-f9d787bcd-798jd\" (UID: \"903e4b83-2ba7-4ddb-a519-914852dab480\") " pod="openshift-console/console-f9d787bcd-798jd" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.758276 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/903e4b83-2ba7-4ddb-a519-914852dab480-oauth-serving-cert\") pod \"console-f9d787bcd-798jd\" (UID: \"903e4b83-2ba7-4ddb-a519-914852dab480\") " pod="openshift-console/console-f9d787bcd-798jd" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.758307 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/903e4b83-2ba7-4ddb-a519-914852dab480-service-ca\") pod \"console-f9d787bcd-798jd\" (UID: \"903e4b83-2ba7-4ddb-a519-914852dab480\") " pod="openshift-console/console-f9d787bcd-798jd" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.758327 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/903e4b83-2ba7-4ddb-a519-914852dab480-console-serving-cert\") pod \"console-f9d787bcd-798jd\" (UID: \"903e4b83-2ba7-4ddb-a519-914852dab480\") " pod="openshift-console/console-f9d787bcd-798jd" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.758352 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/903e4b83-2ba7-4ddb-a519-914852dab480-console-oauth-config\") pod \"console-f9d787bcd-798jd\" (UID: \"903e4b83-2ba7-4ddb-a519-914852dab480\") " pod="openshift-console/console-f9d787bcd-798jd" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.758435 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/903e4b83-2ba7-4ddb-a519-914852dab480-console-config\") pod \"console-f9d787bcd-798jd\" (UID: \"903e4b83-2ba7-4ddb-a519-914852dab480\") " pod="openshift-console/console-f9d787bcd-798jd" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.765550 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-6qqbr"] Jan 27 06:59:15 crc kubenswrapper[4796]: W0127 06:59:15.772561 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d2d7f37_8d8f_4d7b_a77b_1769764774a3.slice/crio-17fb0ddf4dba24ff8d98c2e742195f07a7f1c8612aa29775475bafe8b6fa32ac WatchSource:0}: Error finding container 
17fb0ddf4dba24ff8d98c2e742195f07a7f1c8612aa29775475bafe8b6fa32ac: Status 404 returned error can't find the container with id 17fb0ddf4dba24ff8d98c2e742195f07a7f1c8612aa29775475bafe8b6fa32ac Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.859337 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/903e4b83-2ba7-4ddb-a519-914852dab480-trusted-ca-bundle\") pod \"console-f9d787bcd-798jd\" (UID: \"903e4b83-2ba7-4ddb-a519-914852dab480\") " pod="openshift-console/console-f9d787bcd-798jd" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.859399 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtdv8\" (UniqueName: \"kubernetes.io/projected/903e4b83-2ba7-4ddb-a519-914852dab480-kube-api-access-rtdv8\") pod \"console-f9d787bcd-798jd\" (UID: \"903e4b83-2ba7-4ddb-a519-914852dab480\") " pod="openshift-console/console-f9d787bcd-798jd" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.859440 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/903e4b83-2ba7-4ddb-a519-914852dab480-oauth-serving-cert\") pod \"console-f9d787bcd-798jd\" (UID: \"903e4b83-2ba7-4ddb-a519-914852dab480\") " pod="openshift-console/console-f9d787bcd-798jd" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.859477 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/903e4b83-2ba7-4ddb-a519-914852dab480-service-ca\") pod \"console-f9d787bcd-798jd\" (UID: \"903e4b83-2ba7-4ddb-a519-914852dab480\") " pod="openshift-console/console-f9d787bcd-798jd" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.859499 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/903e4b83-2ba7-4ddb-a519-914852dab480-console-serving-cert\") pod \"console-f9d787bcd-798jd\" (UID: \"903e4b83-2ba7-4ddb-a519-914852dab480\") " pod="openshift-console/console-f9d787bcd-798jd" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.859547 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/903e4b83-2ba7-4ddb-a519-914852dab480-console-oauth-config\") pod \"console-f9d787bcd-798jd\" (UID: \"903e4b83-2ba7-4ddb-a519-914852dab480\") " pod="openshift-console/console-f9d787bcd-798jd" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.859595 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/903e4b83-2ba7-4ddb-a519-914852dab480-console-config\") pod \"console-f9d787bcd-798jd\" (UID: \"903e4b83-2ba7-4ddb-a519-914852dab480\") " pod="openshift-console/console-f9d787bcd-798jd" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.861656 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/903e4b83-2ba7-4ddb-a519-914852dab480-oauth-serving-cert\") pod \"console-f9d787bcd-798jd\" (UID: \"903e4b83-2ba7-4ddb-a519-914852dab480\") " pod="openshift-console/console-f9d787bcd-798jd" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.861938 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/903e4b83-2ba7-4ddb-a519-914852dab480-service-ca\") pod \"console-f9d787bcd-798jd\" (UID: \"903e4b83-2ba7-4ddb-a519-914852dab480\") " pod="openshift-console/console-f9d787bcd-798jd" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.862561 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/903e4b83-2ba7-4ddb-a519-914852dab480-console-config\") pod \"console-f9d787bcd-798jd\" (UID: \"903e4b83-2ba7-4ddb-a519-914852dab480\") " pod="openshift-console/console-f9d787bcd-798jd" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.862645 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/903e4b83-2ba7-4ddb-a519-914852dab480-trusted-ca-bundle\") pod \"console-f9d787bcd-798jd\" (UID: \"903e4b83-2ba7-4ddb-a519-914852dab480\") " pod="openshift-console/console-f9d787bcd-798jd" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.865517 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/903e4b83-2ba7-4ddb-a519-914852dab480-console-oauth-config\") pod \"console-f9d787bcd-798jd\" (UID: \"903e4b83-2ba7-4ddb-a519-914852dab480\") " pod="openshift-console/console-f9d787bcd-798jd" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.865686 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/903e4b83-2ba7-4ddb-a519-914852dab480-console-serving-cert\") pod \"console-f9d787bcd-798jd\" (UID: \"903e4b83-2ba7-4ddb-a519-914852dab480\") " pod="openshift-console/console-f9d787bcd-798jd" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.872194 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-j74n8" event={"ID":"ac6cd9e3-211b-4aa3-9bf8-a76b68e6deb0","Type":"ContainerStarted","Data":"5dea23560235fa1099f3b444103fa20924f62a880296305989436f941825b2e0"} Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.873405 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-6qqbr" event={"ID":"9d2d7f37-8d8f-4d7b-a77b-1769764774a3","Type":"ContainerStarted","Data":"17fb0ddf4dba24ff8d98c2e742195f07a7f1c8612aa29775475bafe8b6fa32ac"} Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.878263 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtdv8\" (UniqueName: \"kubernetes.io/projected/903e4b83-2ba7-4ddb-a519-914852dab480-kube-api-access-rtdv8\") pod \"console-f9d787bcd-798jd\" (UID: \"903e4b83-2ba7-4ddb-a519-914852dab480\") " pod="openshift-console/console-f9d787bcd-798jd" Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 06:59:15.911673 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-7jd5j"] Jan 27 06:59:15 crc kubenswrapper[4796]: W0127 06:59:15.914820 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod879b02ea_4120_4fc0_9be3_2cdabfc554f2.slice/crio-142e937251b82863ae6ecf1f56dc1f95403b85a066ec1fe412d3059fbc6e5754 WatchSource:0}: Error finding container 142e937251b82863ae6ecf1f56dc1f95403b85a066ec1fe412d3059fbc6e5754: Status 404 returned error can't find the container with id 142e937251b82863ae6ecf1f56dc1f95403b85a066ec1fe412d3059fbc6e5754 Jan 27 06:59:15 crc kubenswrapper[4796]: I0127 
06:59:15.933375 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d787bcd-798jd" Jan 27 06:59:16 crc kubenswrapper[4796]: I0127 06:59:16.058558 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-bctnp"] Jan 27 06:59:16 crc kubenswrapper[4796]: W0127 06:59:16.062109 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b54d04c_747b_48f4_92b1_bcece1212861.slice/crio-7250e3c7ba6c6e0760850f8467f81b0dbb3cba7ddf86a295bf10a602ed53ea18 WatchSource:0}: Error finding container 7250e3c7ba6c6e0760850f8467f81b0dbb3cba7ddf86a295bf10a602ed53ea18: Status 404 returned error can't find the container with id 7250e3c7ba6c6e0760850f8467f81b0dbb3cba7ddf86a295bf10a602ed53ea18 Jan 27 06:59:16 crc kubenswrapper[4796]: I0127 06:59:16.314241 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d787bcd-798jd"] Jan 27 06:59:16 crc kubenswrapper[4796]: W0127 06:59:16.328111 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod903e4b83_2ba7_4ddb_a519_914852dab480.slice/crio-b575f146f99b17304576d34a81220c77f11592871a66cd978a01995570c3e3c3 WatchSource:0}: Error finding container b575f146f99b17304576d34a81220c77f11592871a66cd978a01995570c3e3c3: Status 404 returned error can't find the container with id b575f146f99b17304576d34a81220c77f11592871a66cd978a01995570c3e3c3 Jan 27 06:59:16 crc kubenswrapper[4796]: I0127 06:59:16.878875 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7jd5j" event={"ID":"879b02ea-4120-4fc0-9be3-2cdabfc554f2","Type":"ContainerStarted","Data":"142e937251b82863ae6ecf1f56dc1f95403b85a066ec1fe412d3059fbc6e5754"} Jan 27 06:59:16 crc kubenswrapper[4796]: I0127 06:59:16.880106 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bctnp" event={"ID":"7b54d04c-747b-48f4-92b1-bcece1212861","Type":"ContainerStarted","Data":"7250e3c7ba6c6e0760850f8467f81b0dbb3cba7ddf86a295bf10a602ed53ea18"} Jan 27 06:59:16 crc kubenswrapper[4796]: I0127 06:59:16.881576 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d787bcd-798jd" event={"ID":"903e4b83-2ba7-4ddb-a519-914852dab480","Type":"ContainerStarted","Data":"47530ec1e4e38182599056e784639f2c15fc4b04e4599ded3eb11b79772fc9d8"} Jan 27 06:59:16 crc kubenswrapper[4796]: I0127 06:59:16.881604 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d787bcd-798jd" event={"ID":"903e4b83-2ba7-4ddb-a519-914852dab480","Type":"ContainerStarted","Data":"b575f146f99b17304576d34a81220c77f11592871a66cd978a01995570c3e3c3"} Jan 27 06:59:16 crc kubenswrapper[4796]: I0127 06:59:16.905109 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d787bcd-798jd" podStartSLOduration=1.905085465 podStartE2EDuration="1.905085465s" podCreationTimestamp="2026-01-27 06:59:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 06:59:16.894624322 +0000 UTC m=+778.001591679" watchObservedRunningTime="2026-01-27 06:59:16.905085465 +0000 UTC m=+778.012052802" Jan 27 06:59:17 crc kubenswrapper[4796]: I0127 06:59:17.496656 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-operators-bzkbn" Jan 27 06:59:17 crc kubenswrapper[4796]: I0127 06:59:17.541143 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bzkbn" Jan 27 06:59:17 crc kubenswrapper[4796]: I0127 06:59:17.734821 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bzkbn"] Jan 27 06:59:18 crc kubenswrapper[4796]: I0127 06:59:18.890967 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bzkbn" podUID="360ae9f1-f5d9-4bb8-94cb-96e1556b9d69" containerName="registry-server" containerID="cri-o://2b88b7e20039e951cc3bc8a65cd99ed3bb7b3f4a2638c1464e8809415c910bd1" gracePeriod=2 Jan 27 06:59:19 crc kubenswrapper[4796]: I0127 06:59:19.900046 4796 generic.go:334] "Generic (PLEG): container finished" podID="360ae9f1-f5d9-4bb8-94cb-96e1556b9d69" containerID="2b88b7e20039e951cc3bc8a65cd99ed3bb7b3f4a2638c1464e8809415c910bd1" exitCode=0 Jan 27 06:59:19 crc kubenswrapper[4796]: I0127 06:59:19.900095 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bzkbn" event={"ID":"360ae9f1-f5d9-4bb8-94cb-96e1556b9d69","Type":"ContainerDied","Data":"2b88b7e20039e951cc3bc8a65cd99ed3bb7b3f4a2638c1464e8809415c910bd1"} Jan 27 06:59:22 crc kubenswrapper[4796]: I0127 06:59:22.156968 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bzkbn" Jan 27 06:59:22 crc kubenswrapper[4796]: I0127 06:59:22.253718 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqc5x\" (UniqueName: \"kubernetes.io/projected/360ae9f1-f5d9-4bb8-94cb-96e1556b9d69-kube-api-access-wqc5x\") pod \"360ae9f1-f5d9-4bb8-94cb-96e1556b9d69\" (UID: \"360ae9f1-f5d9-4bb8-94cb-96e1556b9d69\") " Jan 27 06:59:22 crc kubenswrapper[4796]: I0127 06:59:22.255081 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/360ae9f1-f5d9-4bb8-94cb-96e1556b9d69-catalog-content\") pod \"360ae9f1-f5d9-4bb8-94cb-96e1556b9d69\" (UID: \"360ae9f1-f5d9-4bb8-94cb-96e1556b9d69\") " Jan 27 06:59:22 crc kubenswrapper[4796]: I0127 06:59:22.255292 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/360ae9f1-f5d9-4bb8-94cb-96e1556b9d69-utilities\") pod \"360ae9f1-f5d9-4bb8-94cb-96e1556b9d69\" (UID: \"360ae9f1-f5d9-4bb8-94cb-96e1556b9d69\") " Jan 27 06:59:22 crc kubenswrapper[4796]: I0127 06:59:22.257787 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/360ae9f1-f5d9-4bb8-94cb-96e1556b9d69-utilities" (OuterVolumeSpecName: "utilities") pod "360ae9f1-f5d9-4bb8-94cb-96e1556b9d69" (UID: "360ae9f1-f5d9-4bb8-94cb-96e1556b9d69"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:59:22 crc kubenswrapper[4796]: I0127 06:59:22.260834 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/360ae9f1-f5d9-4bb8-94cb-96e1556b9d69-kube-api-access-wqc5x" (OuterVolumeSpecName: "kube-api-access-wqc5x") pod "360ae9f1-f5d9-4bb8-94cb-96e1556b9d69" (UID: "360ae9f1-f5d9-4bb8-94cb-96e1556b9d69"). InnerVolumeSpecName "kube-api-access-wqc5x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:59:22 crc kubenswrapper[4796]: I0127 06:59:22.356829 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqc5x\" (UniqueName: \"kubernetes.io/projected/360ae9f1-f5d9-4bb8-94cb-96e1556b9d69-kube-api-access-wqc5x\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:22 crc kubenswrapper[4796]: I0127 06:59:22.356867 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/360ae9f1-f5d9-4bb8-94cb-96e1556b9d69-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:22 crc kubenswrapper[4796]: I0127 06:59:22.392102 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/360ae9f1-f5d9-4bb8-94cb-96e1556b9d69-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "360ae9f1-f5d9-4bb8-94cb-96e1556b9d69" (UID: "360ae9f1-f5d9-4bb8-94cb-96e1556b9d69"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:59:22 crc kubenswrapper[4796]: I0127 06:59:22.458513 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/360ae9f1-f5d9-4bb8-94cb-96e1556b9d69-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:22 crc kubenswrapper[4796]: I0127 06:59:22.929185 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bzkbn" event={"ID":"360ae9f1-f5d9-4bb8-94cb-96e1556b9d69","Type":"ContainerDied","Data":"69dea03b695d5374487585d2fd519a86ceb37321f6d33e6374d49f5429b201dc"} Jan 27 06:59:22 crc kubenswrapper[4796]: I0127 06:59:22.929683 4796 scope.go:117] "RemoveContainer" containerID="2b88b7e20039e951cc3bc8a65cd99ed3bb7b3f4a2638c1464e8809415c910bd1" Jan 27 06:59:22 crc kubenswrapper[4796]: I0127 06:59:22.929308 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bzkbn" Jan 27 06:59:22 crc kubenswrapper[4796]: I0127 06:59:22.960100 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bzkbn"] Jan 27 06:59:22 crc kubenswrapper[4796]: I0127 06:59:22.960155 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bzkbn"] Jan 27 06:59:22 crc kubenswrapper[4796]: I0127 06:59:22.961688 4796 scope.go:117] "RemoveContainer" containerID="f9ef00658a5b4d47cb3f59bf5582e04c066df135441bfab8812426b8df3f9f6d" Jan 27 06:59:22 crc kubenswrapper[4796]: I0127 06:59:22.983769 4796 scope.go:117] "RemoveContainer" containerID="ff8782937689fecd601ec420509cefa7f3997ba0a547ff98db936f1d5303838a" Jan 27 06:59:23 crc kubenswrapper[4796]: I0127 06:59:23.942985 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-j74n8" event={"ID":"ac6cd9e3-211b-4aa3-9bf8-a76b68e6deb0","Type":"ContainerStarted","Data":"21dc3396c636374c83c9dc4de6cfe92884f96f96f85185d4aa51705c5d7edb36"} Jan 27 06:59:23 crc kubenswrapper[4796]: I0127 06:59:23.943293 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-j74n8" Jan 27 06:59:23 crc kubenswrapper[4796]: I0127 06:59:23.945925 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7jd5j" event={"ID":"879b02ea-4120-4fc0-9be3-2cdabfc554f2","Type":"ContainerStarted","Data":"44a768014a3ea4cdb80639dbef9eb6b98055698b1df4f620ba21eb2c8a0f1fb9"} Jan 27 06:59:23 crc kubenswrapper[4796]: I0127 06:59:23.950456 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-6qqbr" event={"ID":"9d2d7f37-8d8f-4d7b-a77b-1769764774a3","Type":"ContainerStarted","Data":"6d005a58d4f5ac80d6f53d2e3feb9a935e72e93cb2f0223aea822ffbdc95b214"} Jan 27 06:59:23 crc kubenswrapper[4796]: I0127 06:59:23.953804 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bctnp" event={"ID":"7b54d04c-747b-48f4-92b1-bcece1212861","Type":"ContainerStarted","Data":"31750c140dac8e1faf1f011fc72e9c0379df972f6fcd52b84d400520af50c780"} Jan 27 06:59:23 crc kubenswrapper[4796]: I0127 06:59:23.954268 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bctnp" Jan 27 06:59:23 crc kubenswrapper[4796]: I0127 06:59:23.971489 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-j74n8" podStartSLOduration=1.967086626 podStartE2EDuration="8.971405417s" podCreationTimestamp="2026-01-27 06:59:15 +0000 UTC" firstStartedPulling="2026-01-27 06:59:15.649724114 +0000 UTC m=+776.756691441" lastFinishedPulling="2026-01-27 06:59:22.654042875 +0000 UTC m=+783.761010232" observedRunningTime="2026-01-27 06:59:23.962707388 +0000 UTC m=+785.069674775" watchObservedRunningTime="2026-01-27 06:59:23.971405417 +0000 UTC m=+785.078372784" Jan 27 06:59:23 crc kubenswrapper[4796]: I0127 06:59:23.991603 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-7jd5j" podStartSLOduration=2.254097286 podStartE2EDuration="8.991532665s" podCreationTimestamp="2026-01-27 06:59:15 +0000 UTC" firstStartedPulling="2026-01-27 06:59:15.917285914 +0000 UTC m=+777.024253241" lastFinishedPulling="2026-01-27 06:59:22.654721263 +0000 UTC m=+783.761688620" 
observedRunningTime="2026-01-27 06:59:23.984851397 +0000 UTC m=+785.091818774" watchObservedRunningTime="2026-01-27 06:59:23.991532665 +0000 UTC m=+785.098537044" Jan 27 06:59:24 crc kubenswrapper[4796]: I0127 06:59:24.017315 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bctnp" podStartSLOduration=2.384932491 podStartE2EDuration="9.017265404s" podCreationTimestamp="2026-01-27 06:59:15 +0000 UTC" firstStartedPulling="2026-01-27 06:59:16.063938857 +0000 UTC m=+777.170906194" lastFinishedPulling="2026-01-27 06:59:22.69627178 +0000 UTC m=+783.803239107" observedRunningTime="2026-01-27 06:59:24.014518445 +0000 UTC m=+785.121485802" watchObservedRunningTime="2026-01-27 06:59:24.017265404 +0000 UTC m=+785.124232751" Jan 27 06:59:24 crc kubenswrapper[4796]: I0127 06:59:24.754031 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="360ae9f1-f5d9-4bb8-94cb-96e1556b9d69" path="/var/lib/kubelet/pods/360ae9f1-f5d9-4bb8-94cb-96e1556b9d69/volumes" Jan 27 06:59:25 crc kubenswrapper[4796]: I0127 06:59:25.934627 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d787bcd-798jd" Jan 27 06:59:25 crc kubenswrapper[4796]: I0127 06:59:25.934718 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d787bcd-798jd" Jan 27 06:59:25 crc kubenswrapper[4796]: I0127 06:59:25.942204 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d787bcd-798jd" Jan 27 06:59:25 crc kubenswrapper[4796]: I0127 06:59:25.976903 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d787bcd-798jd" Jan 27 06:59:26 crc kubenswrapper[4796]: I0127 06:59:26.061389 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-r6xbk"] Jan 27 06:59:27 crc kubenswrapper[4796]: I0127 06:59:27.990568 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-6qqbr" event={"ID":"9d2d7f37-8d8f-4d7b-a77b-1769764774a3","Type":"ContainerStarted","Data":"b2f224e8da7615867294efd27c675ced75beb4733d8ac5e37f8b250be4a3b16f"} Jan 27 06:59:28 crc kubenswrapper[4796]: I0127 06:59:28.009041 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-6qqbr" podStartSLOduration=1.844298394 podStartE2EDuration="13.00901926s" podCreationTimestamp="2026-01-27 06:59:15 +0000 UTC" firstStartedPulling="2026-01-27 06:59:15.775198154 +0000 UTC m=+776.882165481" lastFinishedPulling="2026-01-27 06:59:26.93991898 +0000 UTC m=+788.046886347" observedRunningTime="2026-01-27 06:59:28.008519547 +0000 UTC m=+789.115486874" watchObservedRunningTime="2026-01-27 06:59:28.00901926 +0000 UTC m=+789.115986587" Jan 27 06:59:30 crc kubenswrapper[4796]: I0127 06:59:30.625515 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-j74n8" Jan 27 06:59:33 crc kubenswrapper[4796]: I0127 06:59:33.788062 4796 patch_prober.go:28] interesting pod/machine-config-daemon-qfqgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 06:59:33 crc kubenswrapper[4796]: I0127 06:59:33.788583 4796 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" podUID="84d7512b-555d-440a-b817-deb8ba12f61d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 06:59:35 crc kubenswrapper[4796]: I0127 06:59:35.594832 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bctnp" Jan 27 06:59:38 crc kubenswrapper[4796]: I0127 06:59:38.604621 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9nlw8"] Jan 27 06:59:38 crc kubenswrapper[4796]: E0127 06:59:38.605327 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="360ae9f1-f5d9-4bb8-94cb-96e1556b9d69" containerName="extract-utilities" Jan 27 06:59:38 crc kubenswrapper[4796]: I0127 06:59:38.605349 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="360ae9f1-f5d9-4bb8-94cb-96e1556b9d69" containerName="extract-utilities" Jan 27 06:59:38 crc kubenswrapper[4796]: E0127 06:59:38.605374 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="360ae9f1-f5d9-4bb8-94cb-96e1556b9d69" containerName="extract-content" Jan 27 06:59:38 crc kubenswrapper[4796]: I0127 06:59:38.605386 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="360ae9f1-f5d9-4bb8-94cb-96e1556b9d69" containerName="extract-content" Jan 27 06:59:38 crc kubenswrapper[4796]: E0127 06:59:38.605413 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="360ae9f1-f5d9-4bb8-94cb-96e1556b9d69" containerName="registry-server" Jan 27 06:59:38 crc kubenswrapper[4796]: I0127 06:59:38.605425 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="360ae9f1-f5d9-4bb8-94cb-96e1556b9d69" containerName="registry-server" Jan 27 06:59:38 crc kubenswrapper[4796]: I0127 06:59:38.605617 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="360ae9f1-f5d9-4bb8-94cb-96e1556b9d69" containerName="registry-server" Jan 27 06:59:38 crc kubenswrapper[4796]: I0127 06:59:38.606839 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9nlw8" Jan 27 06:59:38 crc kubenswrapper[4796]: I0127 06:59:38.608397 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9nlw8"] Jan 27 06:59:38 crc kubenswrapper[4796]: I0127 06:59:38.728458 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3f80031-dd83-44e3-9567-06830b504490-catalog-content\") pod \"community-operators-9nlw8\" (UID: \"d3f80031-dd83-44e3-9567-06830b504490\") " pod="openshift-marketplace/community-operators-9nlw8" Jan 27 06:59:38 crc kubenswrapper[4796]: I0127 06:59:38.728894 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3f80031-dd83-44e3-9567-06830b504490-utilities\") pod \"community-operators-9nlw8\" (UID: \"d3f80031-dd83-44e3-9567-06830b504490\") " pod="openshift-marketplace/community-operators-9nlw8" Jan 27 06:59:38 crc kubenswrapper[4796]: I0127 06:59:38.729045 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gldlx\" (UniqueName: \"kubernetes.io/projected/d3f80031-dd83-44e3-9567-06830b504490-kube-api-access-gldlx\") pod \"community-operators-9nlw8\" (UID: \"d3f80031-dd83-44e3-9567-06830b504490\") " pod="openshift-marketplace/community-operators-9nlw8" Jan 27 06:59:38 crc kubenswrapper[4796]: I0127 06:59:38.829952 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gldlx\" (UniqueName: \"kubernetes.io/projected/d3f80031-dd83-44e3-9567-06830b504490-kube-api-access-gldlx\") pod \"community-operators-9nlw8\" (UID: \"d3f80031-dd83-44e3-9567-06830b504490\") " pod="openshift-marketplace/community-operators-9nlw8" Jan 27 06:59:38 crc kubenswrapper[4796]: I0127 06:59:38.830014 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3f80031-dd83-44e3-9567-06830b504490-catalog-content\") pod \"community-operators-9nlw8\" (UID: \"d3f80031-dd83-44e3-9567-06830b504490\") " pod="openshift-marketplace/community-operators-9nlw8" Jan 27 06:59:38 crc kubenswrapper[4796]: I0127 06:59:38.830056 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3f80031-dd83-44e3-9567-06830b504490-utilities\") pod \"community-operators-9nlw8\" (UID: \"d3f80031-dd83-44e3-9567-06830b504490\") " pod="openshift-marketplace/community-operators-9nlw8" Jan 27 06:59:38 crc kubenswrapper[4796]: I0127 06:59:38.830490 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3f80031-dd83-44e3-9567-06830b504490-utilities\") pod \"community-operators-9nlw8\" (UID: \"d3f80031-dd83-44e3-9567-06830b504490\") " pod="openshift-marketplace/community-operators-9nlw8" Jan 27 06:59:38 crc kubenswrapper[4796]: I0127 06:59:38.830743 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3f80031-dd83-44e3-9567-06830b504490-catalog-content\") pod \"community-operators-9nlw8\" (UID: \"d3f80031-dd83-44e3-9567-06830b504490\") " pod="openshift-marketplace/community-operators-9nlw8" Jan 27 06:59:38 crc kubenswrapper[4796]: I0127 06:59:38.867952 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gldlx\" (UniqueName: \"kubernetes.io/projected/d3f80031-dd83-44e3-9567-06830b504490-kube-api-access-gldlx\") pod \"community-operators-9nlw8\" (UID: \"d3f80031-dd83-44e3-9567-06830b504490\") " pod="openshift-marketplace/community-operators-9nlw8" Jan 27 06:59:38 crc kubenswrapper[4796]: I0127 06:59:38.938578 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9nlw8" Jan 27 06:59:39 crc kubenswrapper[4796]: I0127 06:59:39.255488 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9nlw8"] Jan 27 06:59:40 crc kubenswrapper[4796]: I0127 06:59:40.068953 4796 generic.go:334] "Generic (PLEG): container finished" podID="d3f80031-dd83-44e3-9567-06830b504490" containerID="cba3d25303cea4fd94453c50579b6470d9917fb7f5308d49f1c134d6882a360d" exitCode=0 Jan 27 06:59:40 crc kubenswrapper[4796]: I0127 06:59:40.069009 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9nlw8" event={"ID":"d3f80031-dd83-44e3-9567-06830b504490","Type":"ContainerDied","Data":"cba3d25303cea4fd94453c50579b6470d9917fb7f5308d49f1c134d6882a360d"} Jan 27 06:59:40 crc kubenswrapper[4796]: I0127 06:59:40.069037 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9nlw8" event={"ID":"d3f80031-dd83-44e3-9567-06830b504490","Type":"ContainerStarted","Data":"5b3900e99f344ccab73c2d3c92702dba7c6566948994920040adb92155fceb8a"} Jan 27 06:59:41 crc kubenswrapper[4796]: I0127 06:59:41.077256 4796 generic.go:334] "Generic (PLEG): container finished" podID="d3f80031-dd83-44e3-9567-06830b504490" containerID="5c7743fe1a71329ebee47675a03e06d8064ce3cd6c9042eac4d1e7c5821d37b6" exitCode=0 Jan 27 06:59:41 crc kubenswrapper[4796]: I0127 06:59:41.077349 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9nlw8" event={"ID":"d3f80031-dd83-44e3-9567-06830b504490","Type":"ContainerDied","Data":"5c7743fe1a71329ebee47675a03e06d8064ce3cd6c9042eac4d1e7c5821d37b6"} Jan 27 06:59:42 crc kubenswrapper[4796]: I0127 06:59:42.086218 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9nlw8" event={"ID":"d3f80031-dd83-44e3-9567-06830b504490","Type":"ContainerStarted","Data":"e4e1dde69f11562c6e90289ef35aa0e619b271508e933504bd9673eea8728a54"} Jan 27 06:59:42 crc kubenswrapper[4796]: I0127 06:59:42.111608 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9nlw8" podStartSLOduration=2.717323612 podStartE2EDuration="4.111550862s" podCreationTimestamp="2026-01-27 06:59:38 +0000 UTC" firstStartedPulling="2026-01-27 06:59:40.070854574 +0000 UTC m=+801.177821911" lastFinishedPulling="2026-01-27 06:59:41.465081834 +0000 UTC m=+802.572049161" observedRunningTime="2026-01-27 06:59:42.10825597 +0000 UTC m=+803.215223347" watchObservedRunningTime="2026-01-27 06:59:42.111550862 +0000 UTC m=+803.218518219" Jan 27 06:59:48 crc kubenswrapper[4796]: I0127 06:59:48.940229 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9nlw8" Jan 27 06:59:48 crc kubenswrapper[4796]: I0127 06:59:48.940942 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9nlw8" Jan 27 06:59:48 crc kubenswrapper[4796]: I0127 06:59:48.983087 4796 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9nlw8" Jan 27 06:59:49 crc kubenswrapper[4796]: I0127 06:59:49.200255 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9nlw8" Jan 27 06:59:49 crc kubenswrapper[4796]: I0127 06:59:49.380981 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgfdx5"] Jan 27 06:59:49 crc kubenswrapper[4796]: I0127 06:59:49.382954 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgfdx5" Jan 27 06:59:49 crc kubenswrapper[4796]: I0127 06:59:49.387415 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 27 06:59:49 crc kubenswrapper[4796]: I0127 06:59:49.396454 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgfdx5"] Jan 27 06:59:49 crc kubenswrapper[4796]: I0127 06:59:49.496521 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlfdf\" (UniqueName: \"kubernetes.io/projected/7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab-kube-api-access-tlfdf\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgfdx5\" (UID: \"7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgfdx5" Jan 27 06:59:49 crc kubenswrapper[4796]: I0127 06:59:49.496648 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgfdx5\" (UID: \"7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgfdx5" Jan 27 06:59:49 crc kubenswrapper[4796]: I0127 06:59:49.496746 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgfdx5\" (UID: \"7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgfdx5" Jan 27 06:59:49 crc kubenswrapper[4796]: I0127 06:59:49.598215 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgfdx5\" (UID: \"7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgfdx5" Jan 27 06:59:49 crc kubenswrapper[4796]: I0127 06:59:49.598386 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlfdf\" (UniqueName: \"kubernetes.io/projected/7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab-kube-api-access-tlfdf\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgfdx5\" (UID: \"7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgfdx5" Jan 27 06:59:49 crc 
kubenswrapper[4796]: I0127 06:59:49.598441 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgfdx5\" (UID: \"7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgfdx5" Jan 27 06:59:49 crc kubenswrapper[4796]: I0127 06:59:49.598871 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgfdx5\" (UID: \"7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgfdx5" Jan 27 06:59:49 crc kubenswrapper[4796]: I0127 06:59:49.599061 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgfdx5\" (UID: \"7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgfdx5" Jan 27 06:59:49 crc kubenswrapper[4796]: I0127 06:59:49.625920 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlfdf\" (UniqueName: \"kubernetes.io/projected/7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab-kube-api-access-tlfdf\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgfdx5\" (UID: \"7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgfdx5" Jan 27 06:59:49 crc kubenswrapper[4796]: I0127 06:59:49.715665 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgfdx5" Jan 27 06:59:50 crc kubenswrapper[4796]: I0127 06:59:50.176160 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgfdx5"] Jan 27 06:59:50 crc kubenswrapper[4796]: W0127 06:59:50.187210 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7aed5efe_8cb6_4dc1_b17a_331b5bfd64ab.slice/crio-3797c152cc779b337e489d58b8951a20cc7ded674f4ecb7db06e3f7ded6d8ef7 WatchSource:0}: Error finding container 3797c152cc779b337e489d58b8951a20cc7ded674f4ecb7db06e3f7ded6d8ef7: Status 404 returned error can't find the container with id 3797c152cc779b337e489d58b8951a20cc7ded674f4ecb7db06e3f7ded6d8ef7 Jan 27 06:59:51 crc kubenswrapper[4796]: I0127 06:59:51.129689 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-r6xbk" podUID="00061f00-b799-407e-8b71-30de57b92847" containerName="console" containerID="cri-o://c68d4654c401b0603b8a27cbfa9e7307fcaec65a2d4edd786a6e1de6950aedcd" gracePeriod=15 Jan 27 06:59:51 crc kubenswrapper[4796]: I0127 06:59:51.183013 4796 generic.go:334] "Generic (PLEG): container finished" podID="7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab" containerID="dd4d266cb2365d8ded7a9a40aeef225e781399682fbc8e68d1ad6e217b886bc5" exitCode=0 Jan 27 06:59:51 crc kubenswrapper[4796]: I0127 06:59:51.183067 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgfdx5" event={"ID":"7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab","Type":"ContainerDied","Data":"dd4d266cb2365d8ded7a9a40aeef225e781399682fbc8e68d1ad6e217b886bc5"} Jan 27 06:59:51 crc kubenswrapper[4796]: I0127 06:59:51.183100 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgfdx5" event={"ID":"7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab","Type":"ContainerStarted","Data":"3797c152cc779b337e489d58b8951a20cc7ded674f4ecb7db06e3f7ded6d8ef7"} Jan 27 06:59:51 crc kubenswrapper[4796]: I0127 06:59:51.730870 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-r6xbk_00061f00-b799-407e-8b71-30de57b92847/console/0.log" Jan 27 06:59:51 crc kubenswrapper[4796]: I0127 06:59:51.731232 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-r6xbk" Jan 27 06:59:51 crc kubenswrapper[4796]: I0127 06:59:51.826929 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/00061f00-b799-407e-8b71-30de57b92847-console-oauth-config\") pod \"00061f00-b799-407e-8b71-30de57b92847\" (UID: \"00061f00-b799-407e-8b71-30de57b92847\") " Jan 27 06:59:51 crc kubenswrapper[4796]: I0127 06:59:51.827008 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/00061f00-b799-407e-8b71-30de57b92847-console-serving-cert\") pod \"00061f00-b799-407e-8b71-30de57b92847\" (UID: \"00061f00-b799-407e-8b71-30de57b92847\") " Jan 27 06:59:51 crc kubenswrapper[4796]: I0127 06:59:51.827041 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-px6rg\" (UniqueName: \"kubernetes.io/projected/00061f00-b799-407e-8b71-30de57b92847-kube-api-access-px6rg\") pod \"00061f00-b799-407e-8b71-30de57b92847\" (UID: \"00061f00-b799-407e-8b71-30de57b92847\") " Jan 27 06:59:51 crc kubenswrapper[4796]: I0127 06:59:51.827100 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00061f00-b799-407e-8b71-30de57b92847-trusted-ca-bundle\") pod \"00061f00-b799-407e-8b71-30de57b92847\" (UID: \"00061f00-b799-407e-8b71-30de57b92847\") " Jan 27 06:59:51 crc kubenswrapper[4796]: I0127 06:59:51.827127 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/00061f00-b799-407e-8b71-30de57b92847-service-ca\") pod \"00061f00-b799-407e-8b71-30de57b92847\" (UID: \"00061f00-b799-407e-8b71-30de57b92847\") " Jan 27 06:59:51 crc kubenswrapper[4796]: I0127 06:59:51.827339 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/00061f00-b799-407e-8b71-30de57b92847-oauth-serving-cert\") pod \"00061f00-b799-407e-8b71-30de57b92847\" (UID: \"00061f00-b799-407e-8b71-30de57b92847\") " Jan 27 06:59:51 crc kubenswrapper[4796]: I0127 06:59:51.828120 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00061f00-b799-407e-8b71-30de57b92847-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "00061f00-b799-407e-8b71-30de57b92847" (UID: "00061f00-b799-407e-8b71-30de57b92847"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:59:51 crc kubenswrapper[4796]: I0127 06:59:51.828204 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00061f00-b799-407e-8b71-30de57b92847-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "00061f00-b799-407e-8b71-30de57b92847" (UID: "00061f00-b799-407e-8b71-30de57b92847"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:59:51 crc kubenswrapper[4796]: I0127 06:59:51.828164 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00061f00-b799-407e-8b71-30de57b92847-service-ca" (OuterVolumeSpecName: "service-ca") pod "00061f00-b799-407e-8b71-30de57b92847" (UID: "00061f00-b799-407e-8b71-30de57b92847"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:59:51 crc kubenswrapper[4796]: I0127 06:59:51.828295 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/00061f00-b799-407e-8b71-30de57b92847-console-config\") pod \"00061f00-b799-407e-8b71-30de57b92847\" (UID: \"00061f00-b799-407e-8b71-30de57b92847\") " Jan 27 06:59:51 crc kubenswrapper[4796]: I0127 06:59:51.828784 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00061f00-b799-407e-8b71-30de57b92847-console-config" (OuterVolumeSpecName: "console-config") pod "00061f00-b799-407e-8b71-30de57b92847" (UID: "00061f00-b799-407e-8b71-30de57b92847"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 06:59:51 crc kubenswrapper[4796]: I0127 06:59:51.829135 4796 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/00061f00-b799-407e-8b71-30de57b92847-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:51 crc kubenswrapper[4796]: I0127 06:59:51.829158 4796 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/00061f00-b799-407e-8b71-30de57b92847-console-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:51 crc kubenswrapper[4796]: I0127 06:59:51.829169 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00061f00-b799-407e-8b71-30de57b92847-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:51 crc kubenswrapper[4796]: I0127 06:59:51.829179 4796 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/00061f00-b799-407e-8b71-30de57b92847-service-ca\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:51 crc kubenswrapper[4796]: I0127 06:59:51.832692 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00061f00-b799-407e-8b71-30de57b92847-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "00061f00-b799-407e-8b71-30de57b92847" (UID: "00061f00-b799-407e-8b71-30de57b92847"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:59:51 crc kubenswrapper[4796]: I0127 06:59:51.832686 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00061f00-b799-407e-8b71-30de57b92847-kube-api-access-px6rg" (OuterVolumeSpecName: "kube-api-access-px6rg") pod "00061f00-b799-407e-8b71-30de57b92847" (UID: "00061f00-b799-407e-8b71-30de57b92847"). InnerVolumeSpecName "kube-api-access-px6rg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:59:51 crc kubenswrapper[4796]: I0127 06:59:51.833337 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00061f00-b799-407e-8b71-30de57b92847-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "00061f00-b799-407e-8b71-30de57b92847" (UID: "00061f00-b799-407e-8b71-30de57b92847"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 06:59:51 crc kubenswrapper[4796]: I0127 06:59:51.930862 4796 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/00061f00-b799-407e-8b71-30de57b92847-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:51 crc kubenswrapper[4796]: I0127 06:59:51.930918 4796 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/00061f00-b799-407e-8b71-30de57b92847-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:51 crc kubenswrapper[4796]: I0127 06:59:51.930937 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-px6rg\" (UniqueName: \"kubernetes.io/projected/00061f00-b799-407e-8b71-30de57b92847-kube-api-access-px6rg\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:52 crc kubenswrapper[4796]: I0127 06:59:52.192153 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-r6xbk_00061f00-b799-407e-8b71-30de57b92847/console/0.log" Jan 27 06:59:52 crc kubenswrapper[4796]: I0127 06:59:52.192249 4796 generic.go:334] "Generic (PLEG): container finished" podID="00061f00-b799-407e-8b71-30de57b92847" containerID="c68d4654c401b0603b8a27cbfa9e7307fcaec65a2d4edd786a6e1de6950aedcd" exitCode=2 Jan 27 06:59:52 crc kubenswrapper[4796]: I0127 06:59:52.192294 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-r6xbk" event={"ID":"00061f00-b799-407e-8b71-30de57b92847","Type":"ContainerDied","Data":"c68d4654c401b0603b8a27cbfa9e7307fcaec65a2d4edd786a6e1de6950aedcd"} Jan 27 06:59:52 crc kubenswrapper[4796]: I0127 06:59:52.192335 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-r6xbk" event={"ID":"00061f00-b799-407e-8b71-30de57b92847","Type":"ContainerDied","Data":"0af13251b54d8f4efa711f3cd8e48ce4e4e7241b1458f77f0566d4092e99d0f3"} Jan 27 06:59:52 crc kubenswrapper[4796]: I0127 06:59:52.192365 4796 scope.go:117] "RemoveContainer" containerID="c68d4654c401b0603b8a27cbfa9e7307fcaec65a2d4edd786a6e1de6950aedcd" Jan 27 06:59:52 crc kubenswrapper[4796]: I0127 06:59:52.192362 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-r6xbk" Jan 27 06:59:52 crc kubenswrapper[4796]: I0127 06:59:52.227741 4796 scope.go:117] "RemoveContainer" containerID="c68d4654c401b0603b8a27cbfa9e7307fcaec65a2d4edd786a6e1de6950aedcd" Jan 27 06:59:52 crc kubenswrapper[4796]: E0127 06:59:52.228970 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c68d4654c401b0603b8a27cbfa9e7307fcaec65a2d4edd786a6e1de6950aedcd\": container with ID starting with c68d4654c401b0603b8a27cbfa9e7307fcaec65a2d4edd786a6e1de6950aedcd not found: ID does not exist" containerID="c68d4654c401b0603b8a27cbfa9e7307fcaec65a2d4edd786a6e1de6950aedcd" Jan 27 06:59:52 crc kubenswrapper[4796]: I0127 06:59:52.229046 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c68d4654c401b0603b8a27cbfa9e7307fcaec65a2d4edd786a6e1de6950aedcd"} err="failed to get container status \"c68d4654c401b0603b8a27cbfa9e7307fcaec65a2d4edd786a6e1de6950aedcd\": rpc error: code = NotFound desc = could not find container \"c68d4654c401b0603b8a27cbfa9e7307fcaec65a2d4edd786a6e1de6950aedcd\": container with ID starting with c68d4654c401b0603b8a27cbfa9e7307fcaec65a2d4edd786a6e1de6950aedcd not found: ID does not exist" Jan 27 06:59:52 crc kubenswrapper[4796]: I0127 06:59:52.246928 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-r6xbk"] Jan 27 06:59:52 crc kubenswrapper[4796]: I0127 06:59:52.254453 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-r6xbk"] Jan 27 06:59:52 crc kubenswrapper[4796]: I0127 06:59:52.528368 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9nlw8"] Jan 27 06:59:52 crc kubenswrapper[4796]: I0127 06:59:52.528632 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9nlw8" podUID="d3f80031-dd83-44e3-9567-06830b504490" containerName="registry-server" containerID="cri-o://e4e1dde69f11562c6e90289ef35aa0e619b271508e933504bd9673eea8728a54" gracePeriod=2 Jan 27 06:59:52 crc kubenswrapper[4796]: I0127 06:59:52.761475 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00061f00-b799-407e-8b71-30de57b92847" path="/var/lib/kubelet/pods/00061f00-b799-407e-8b71-30de57b92847/volumes" Jan 27 06:59:52 crc kubenswrapper[4796]: I0127 06:59:52.914097 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9nlw8" Jan 27 06:59:53 crc kubenswrapper[4796]: I0127 06:59:53.047471 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3f80031-dd83-44e3-9567-06830b504490-catalog-content\") pod \"d3f80031-dd83-44e3-9567-06830b504490\" (UID: \"d3f80031-dd83-44e3-9567-06830b504490\") " Jan 27 06:59:53 crc kubenswrapper[4796]: I0127 06:59:53.047664 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3f80031-dd83-44e3-9567-06830b504490-utilities\") pod \"d3f80031-dd83-44e3-9567-06830b504490\" (UID: \"d3f80031-dd83-44e3-9567-06830b504490\") " Jan 27 06:59:53 crc kubenswrapper[4796]: I0127 06:59:53.047787 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gldlx\" (UniqueName: \"kubernetes.io/projected/d3f80031-dd83-44e3-9567-06830b504490-kube-api-access-gldlx\") pod \"d3f80031-dd83-44e3-9567-06830b504490\" (UID: \"d3f80031-dd83-44e3-9567-06830b504490\") " Jan 27 06:59:53 crc kubenswrapper[4796]: I0127 06:59:53.048480 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3f80031-dd83-44e3-9567-06830b504490-utilities" (OuterVolumeSpecName: "utilities") pod "d3f80031-dd83-44e3-9567-06830b504490" (UID: "d3f80031-dd83-44e3-9567-06830b504490"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:59:53 crc kubenswrapper[4796]: I0127 06:59:53.057079 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3f80031-dd83-44e3-9567-06830b504490-kube-api-access-gldlx" (OuterVolumeSpecName: "kube-api-access-gldlx") pod "d3f80031-dd83-44e3-9567-06830b504490" (UID: "d3f80031-dd83-44e3-9567-06830b504490"). InnerVolumeSpecName "kube-api-access-gldlx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:59:53 crc kubenswrapper[4796]: I0127 06:59:53.114502 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3f80031-dd83-44e3-9567-06830b504490-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d3f80031-dd83-44e3-9567-06830b504490" (UID: "d3f80031-dd83-44e3-9567-06830b504490"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:59:53 crc kubenswrapper[4796]: I0127 06:59:53.150223 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3f80031-dd83-44e3-9567-06830b504490-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:53 crc kubenswrapper[4796]: I0127 06:59:53.150262 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gldlx\" (UniqueName: \"kubernetes.io/projected/d3f80031-dd83-44e3-9567-06830b504490-kube-api-access-gldlx\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:53 crc kubenswrapper[4796]: I0127 06:59:53.150278 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3f80031-dd83-44e3-9567-06830b504490-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:53 crc kubenswrapper[4796]: I0127 06:59:53.201607 4796 generic.go:334] "Generic (PLEG): container finished" podID="d3f80031-dd83-44e3-9567-06830b504490" containerID="e4e1dde69f11562c6e90289ef35aa0e619b271508e933504bd9673eea8728a54" exitCode=0 Jan 27 06:59:53 crc kubenswrapper[4796]: I0127 06:59:53.201676 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9nlw8" event={"ID":"d3f80031-dd83-44e3-9567-06830b504490","Type":"ContainerDied","Data":"e4e1dde69f11562c6e90289ef35aa0e619b271508e933504bd9673eea8728a54"} Jan 27 06:59:53 crc kubenswrapper[4796]: I0127 06:59:53.201707 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9nlw8" event={"ID":"d3f80031-dd83-44e3-9567-06830b504490","Type":"ContainerDied","Data":"5b3900e99f344ccab73c2d3c92702dba7c6566948994920040adb92155fceb8a"} Jan 27 06:59:53 crc kubenswrapper[4796]: I0127 06:59:53.201725 4796 scope.go:117] "RemoveContainer" containerID="e4e1dde69f11562c6e90289ef35aa0e619b271508e933504bd9673eea8728a54" Jan 27 06:59:53 crc kubenswrapper[4796]: I0127 06:59:53.201827 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9nlw8" Jan 27 06:59:53 crc kubenswrapper[4796]: I0127 06:59:53.207092 4796 generic.go:334] "Generic (PLEG): container finished" podID="7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab" containerID="5a4c338c33416bc7c04ae37a8b0ad16c36966cd738b16abfb1a9b744d66e80e3" exitCode=0 Jan 27 06:59:53 crc kubenswrapper[4796]: I0127 06:59:53.207126 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgfdx5" event={"ID":"7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab","Type":"ContainerDied","Data":"5a4c338c33416bc7c04ae37a8b0ad16c36966cd738b16abfb1a9b744d66e80e3"} Jan 27 06:59:53 crc kubenswrapper[4796]: I0127 06:59:53.236229 4796 scope.go:117] "RemoveContainer" containerID="5c7743fe1a71329ebee47675a03e06d8064ce3cd6c9042eac4d1e7c5821d37b6" Jan 27 06:59:53 crc kubenswrapper[4796]: I0127 06:59:53.239647 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9nlw8"] Jan 27 06:59:53 crc kubenswrapper[4796]: I0127 06:59:53.244989 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9nlw8"] Jan 27 06:59:53 crc kubenswrapper[4796]: I0127 06:59:53.252183 4796 scope.go:117] "RemoveContainer" containerID="cba3d25303cea4fd94453c50579b6470d9917fb7f5308d49f1c134d6882a360d" Jan 27 06:59:53 crc kubenswrapper[4796]: I0127 06:59:53.288751 4796 scope.go:117] "RemoveContainer" containerID="e4e1dde69f11562c6e90289ef35aa0e619b271508e933504bd9673eea8728a54" Jan 27 06:59:53 crc kubenswrapper[4796]: E0127 06:59:53.289113 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4e1dde69f11562c6e90289ef35aa0e619b271508e933504bd9673eea8728a54\": container with ID starting with e4e1dde69f11562c6e90289ef35aa0e619b271508e933504bd9673eea8728a54 not found: ID does not exist" containerID="e4e1dde69f11562c6e90289ef35aa0e619b271508e933504bd9673eea8728a54" Jan 27 06:59:53 crc kubenswrapper[4796]: I0127 06:59:53.289182 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4e1dde69f11562c6e90289ef35aa0e619b271508e933504bd9673eea8728a54"} err="failed to get container status \"e4e1dde69f11562c6e90289ef35aa0e619b271508e933504bd9673eea8728a54\": rpc error: code = NotFound desc = could not find container \"e4e1dde69f11562c6e90289ef35aa0e619b271508e933504bd9673eea8728a54\": container with ID starting with e4e1dde69f11562c6e90289ef35aa0e619b271508e933504bd9673eea8728a54 not found: ID does not exist" Jan 27 06:59:53 crc kubenswrapper[4796]: I0127 06:59:53.289221 4796 scope.go:117] "RemoveContainer" containerID="5c7743fe1a71329ebee47675a03e06d8064ce3cd6c9042eac4d1e7c5821d37b6" Jan 27 06:59:53 crc kubenswrapper[4796]: E0127 06:59:53.289621 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c7743fe1a71329ebee47675a03e06d8064ce3cd6c9042eac4d1e7c5821d37b6\": container with ID starting with 5c7743fe1a71329ebee47675a03e06d8064ce3cd6c9042eac4d1e7c5821d37b6 not found: ID does not exist" containerID="5c7743fe1a71329ebee47675a03e06d8064ce3cd6c9042eac4d1e7c5821d37b6" Jan 27 06:59:53 crc kubenswrapper[4796]: I0127 06:59:53.289653 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c7743fe1a71329ebee47675a03e06d8064ce3cd6c9042eac4d1e7c5821d37b6"} err="failed to get container status 
\"5c7743fe1a71329ebee47675a03e06d8064ce3cd6c9042eac4d1e7c5821d37b6\": rpc error: code = NotFound desc = could not find container \"5c7743fe1a71329ebee47675a03e06d8064ce3cd6c9042eac4d1e7c5821d37b6\": container with ID starting with 5c7743fe1a71329ebee47675a03e06d8064ce3cd6c9042eac4d1e7c5821d37b6 not found: ID does not exist" Jan 27 06:59:53 crc kubenswrapper[4796]: I0127 06:59:53.289680 4796 scope.go:117] "RemoveContainer" containerID="cba3d25303cea4fd94453c50579b6470d9917fb7f5308d49f1c134d6882a360d" Jan 27 06:59:53 crc kubenswrapper[4796]: E0127 06:59:53.289924 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cba3d25303cea4fd94453c50579b6470d9917fb7f5308d49f1c134d6882a360d\": container with ID starting with cba3d25303cea4fd94453c50579b6470d9917fb7f5308d49f1c134d6882a360d not found: ID does not exist" containerID="cba3d25303cea4fd94453c50579b6470d9917fb7f5308d49f1c134d6882a360d" Jan 27 06:59:53 crc kubenswrapper[4796]: I0127 06:59:53.289959 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cba3d25303cea4fd94453c50579b6470d9917fb7f5308d49f1c134d6882a360d"} err="failed to get container status \"cba3d25303cea4fd94453c50579b6470d9917fb7f5308d49f1c134d6882a360d\": rpc error: code = NotFound desc = could not find container \"cba3d25303cea4fd94453c50579b6470d9917fb7f5308d49f1c134d6882a360d\": container with ID starting with cba3d25303cea4fd94453c50579b6470d9917fb7f5308d49f1c134d6882a360d not found: ID does not exist" Jan 27 06:59:54 crc kubenswrapper[4796]: I0127 06:59:54.222270 4796 generic.go:334] "Generic (PLEG): container finished" podID="7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab" containerID="c81de3203521d22bf7b8afa630b7c86cf940262af900ecea86f58c77fff7945b" exitCode=0 Jan 27 06:59:54 crc kubenswrapper[4796]: I0127 06:59:54.222352 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgfdx5" event={"ID":"7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab","Type":"ContainerDied","Data":"c81de3203521d22bf7b8afa630b7c86cf940262af900ecea86f58c77fff7945b"} Jan 27 06:59:54 crc kubenswrapper[4796]: I0127 06:59:54.755853 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3f80031-dd83-44e3-9567-06830b504490" path="/var/lib/kubelet/pods/d3f80031-dd83-44e3-9567-06830b504490/volumes" Jan 27 06:59:55 crc kubenswrapper[4796]: I0127 06:59:55.542198 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgfdx5" Jan 27 06:59:55 crc kubenswrapper[4796]: I0127 06:59:55.685964 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlfdf\" (UniqueName: \"kubernetes.io/projected/7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab-kube-api-access-tlfdf\") pod \"7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab\" (UID: \"7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab\") " Jan 27 06:59:55 crc kubenswrapper[4796]: I0127 06:59:55.686016 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab-bundle\") pod \"7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab\" (UID: \"7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab\") " Jan 27 06:59:55 crc kubenswrapper[4796]: I0127 06:59:55.686054 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab-util\") pod \"7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab\" (UID: \"7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab\") " Jan 27 06:59:55 crc kubenswrapper[4796]: I0127 06:59:55.687270 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab-bundle" (OuterVolumeSpecName: "bundle") pod "7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab" (UID: "7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:59:55 crc kubenswrapper[4796]: I0127 06:59:55.691030 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab-kube-api-access-tlfdf" (OuterVolumeSpecName: "kube-api-access-tlfdf") pod "7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab" (UID: "7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab"). InnerVolumeSpecName "kube-api-access-tlfdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 06:59:55 crc kubenswrapper[4796]: I0127 06:59:55.699385 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab-util" (OuterVolumeSpecName: "util") pod "7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab" (UID: "7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 06:59:55 crc kubenswrapper[4796]: I0127 06:59:55.787074 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlfdf\" (UniqueName: \"kubernetes.io/projected/7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab-kube-api-access-tlfdf\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:55 crc kubenswrapper[4796]: I0127 06:59:55.787112 4796 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:55 crc kubenswrapper[4796]: I0127 06:59:55.787123 4796 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab-util\") on node \"crc\" DevicePath \"\"" Jan 27 06:59:56 crc kubenswrapper[4796]: I0127 06:59:56.248965 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgfdx5" event={"ID":"7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab","Type":"ContainerDied","Data":"3797c152cc779b337e489d58b8951a20cc7ded674f4ecb7db06e3f7ded6d8ef7"} Jan 27 06:59:56 crc kubenswrapper[4796]: I0127 06:59:56.249107 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3797c152cc779b337e489d58b8951a20cc7ded674f4ecb7db06e3f7ded6d8ef7" Jan 27 06:59:56 crc kubenswrapper[4796]: I0127 06:59:56.249229 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgfdx5" Jan 27 07:00:00 crc kubenswrapper[4796]: I0127 07:00:00.226516 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491620-nz9tk"] Jan 27 07:00:00 crc kubenswrapper[4796]: E0127 07:00:00.226792 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3f80031-dd83-44e3-9567-06830b504490" containerName="extract-content" Jan 27 07:00:00 crc kubenswrapper[4796]: I0127 07:00:00.226809 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3f80031-dd83-44e3-9567-06830b504490" containerName="extract-content" Jan 27 07:00:00 crc kubenswrapper[4796]: E0127 07:00:00.226822 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00061f00-b799-407e-8b71-30de57b92847" containerName="console" Jan 27 07:00:00 crc kubenswrapper[4796]: I0127 07:00:00.226831 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="00061f00-b799-407e-8b71-30de57b92847" containerName="console" Jan 27 07:00:00 crc kubenswrapper[4796]: E0127 07:00:00.226849 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab" containerName="extract" Jan 27 07:00:00 crc kubenswrapper[4796]: I0127 07:00:00.226857 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab" containerName="extract" Jan 27 07:00:00 crc kubenswrapper[4796]: E0127 07:00:00.226869 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3f80031-dd83-44e3-9567-06830b504490" containerName="registry-server" Jan 27 07:00:00 crc kubenswrapper[4796]: I0127 07:00:00.226878 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3f80031-dd83-44e3-9567-06830b504490" containerName="registry-server" Jan 27 07:00:00 crc kubenswrapper[4796]: E0127 07:00:00.226893 4796 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab" containerName="pull" Jan 27 07:00:00 crc kubenswrapper[4796]: I0127 07:00:00.226900 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab" containerName="pull" Jan 27 07:00:00 crc kubenswrapper[4796]: E0127 07:00:00.226914 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3f80031-dd83-44e3-9567-06830b504490" containerName="extract-utilities" Jan 27 07:00:00 crc kubenswrapper[4796]: I0127 07:00:00.226922 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3f80031-dd83-44e3-9567-06830b504490" containerName="extract-utilities" Jan 27 07:00:00 crc kubenswrapper[4796]: E0127 07:00:00.226935 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab" containerName="util" Jan 27 07:00:00 crc kubenswrapper[4796]: I0127 07:00:00.226942 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab" containerName="util" Jan 27 07:00:00 crc kubenswrapper[4796]: I0127 07:00:00.227154 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="00061f00-b799-407e-8b71-30de57b92847" containerName="console" Jan 27 07:00:00 crc kubenswrapper[4796]: I0127 07:00:00.227184 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3f80031-dd83-44e3-9567-06830b504490" containerName="registry-server" Jan 27 07:00:00 crc kubenswrapper[4796]: I0127 07:00:00.227204 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab" containerName="extract" Jan 27 07:00:00 crc kubenswrapper[4796]: I0127 07:00:00.227756 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-nz9tk" Jan 27 07:00:00 crc kubenswrapper[4796]: I0127 07:00:00.232634 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 27 07:00:00 crc kubenswrapper[4796]: I0127 07:00:00.232725 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 27 07:00:00 crc kubenswrapper[4796]: I0127 07:00:00.246637 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491620-nz9tk"] Jan 27 07:00:00 crc kubenswrapper[4796]: I0127 07:00:00.347158 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/218a2b48-63ca-462b-99ef-c4f829259d37-secret-volume\") pod \"collect-profiles-29491620-nz9tk\" (UID: \"218a2b48-63ca-462b-99ef-c4f829259d37\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-nz9tk" Jan 27 07:00:00 crc kubenswrapper[4796]: I0127 07:00:00.347231 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grbt2\" (UniqueName: \"kubernetes.io/projected/218a2b48-63ca-462b-99ef-c4f829259d37-kube-api-access-grbt2\") pod \"collect-profiles-29491620-nz9tk\" (UID: \"218a2b48-63ca-462b-99ef-c4f829259d37\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-nz9tk" Jan 27 07:00:00 crc kubenswrapper[4796]: I0127 07:00:00.347401 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/218a2b48-63ca-462b-99ef-c4f829259d37-config-volume\") pod \"collect-profiles-29491620-nz9tk\" (UID: \"218a2b48-63ca-462b-99ef-c4f829259d37\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-nz9tk" Jan 27 07:00:00 crc kubenswrapper[4796]: I0127 07:00:00.449228 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/218a2b48-63ca-462b-99ef-c4f829259d37-config-volume\") pod \"collect-profiles-29491620-nz9tk\" (UID: \"218a2b48-63ca-462b-99ef-c4f829259d37\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-nz9tk" Jan 27 07:00:00 crc kubenswrapper[4796]: I0127 07:00:00.449587 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/218a2b48-63ca-462b-99ef-c4f829259d37-secret-volume\") pod \"collect-profiles-29491620-nz9tk\" (UID: \"218a2b48-63ca-462b-99ef-c4f829259d37\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-nz9tk" Jan 27 07:00:00 crc kubenswrapper[4796]: I0127 07:00:00.449698 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grbt2\" (UniqueName: \"kubernetes.io/projected/218a2b48-63ca-462b-99ef-c4f829259d37-kube-api-access-grbt2\") pod \"collect-profiles-29491620-nz9tk\" (UID: \"218a2b48-63ca-462b-99ef-c4f829259d37\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-nz9tk" Jan 27 07:00:00 crc kubenswrapper[4796]: I0127 07:00:00.451501 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/218a2b48-63ca-462b-99ef-c4f829259d37-config-volume\") pod \"collect-profiles-29491620-nz9tk\" (UID: \"218a2b48-63ca-462b-99ef-c4f829259d37\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-nz9tk" Jan 27 07:00:00 crc kubenswrapper[4796]: I0127 07:00:00.454745 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/218a2b48-63ca-462b-99ef-c4f829259d37-secret-volume\") pod \"collect-profiles-29491620-nz9tk\" (UID: \"218a2b48-63ca-462b-99ef-c4f829259d37\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-nz9tk" Jan 27 07:00:00 crc kubenswrapper[4796]: I0127 07:00:00.466801 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grbt2\" (UniqueName: \"kubernetes.io/projected/218a2b48-63ca-462b-99ef-c4f829259d37-kube-api-access-grbt2\") pod \"collect-profiles-29491620-nz9tk\" (UID: \"218a2b48-63ca-462b-99ef-c4f829259d37\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-nz9tk" Jan 27 07:00:00 crc kubenswrapper[4796]: I0127 07:00:00.548445 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-nz9tk" Jan 27 07:00:00 crc kubenswrapper[4796]: I0127 07:00:00.788294 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491620-nz9tk"] Jan 27 07:00:01 crc kubenswrapper[4796]: I0127 07:00:01.277638 4796 generic.go:334] "Generic (PLEG): container finished" podID="218a2b48-63ca-462b-99ef-c4f829259d37" containerID="c35eeb8b9a6b4e2e51ffb8096de0817b7d129b21b146a84b2c14a1e136d0b02f" exitCode=0 Jan 27 07:00:01 crc kubenswrapper[4796]: I0127 07:00:01.277827 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-nz9tk" event={"ID":"218a2b48-63ca-462b-99ef-c4f829259d37","Type":"ContainerDied","Data":"c35eeb8b9a6b4e2e51ffb8096de0817b7d129b21b146a84b2c14a1e136d0b02f"} Jan 27 07:00:01 crc kubenswrapper[4796]: I0127 07:00:01.277957 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-nz9tk" event={"ID":"218a2b48-63ca-462b-99ef-c4f829259d37","Type":"ContainerStarted","Data":"84576b4d71f856fb6ecf84a7c2a51848cc288011812367eec2201c525054fe44"} Jan 27 07:00:02 crc kubenswrapper[4796]: I0127 07:00:02.497709 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-nz9tk" Jan 27 07:00:02 crc kubenswrapper[4796]: I0127 07:00:02.578505 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/218a2b48-63ca-462b-99ef-c4f829259d37-config-volume\") pod \"218a2b48-63ca-462b-99ef-c4f829259d37\" (UID: \"218a2b48-63ca-462b-99ef-c4f829259d37\") " Jan 27 07:00:02 crc kubenswrapper[4796]: I0127 07:00:02.578681 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grbt2\" (UniqueName: \"kubernetes.io/projected/218a2b48-63ca-462b-99ef-c4f829259d37-kube-api-access-grbt2\") pod \"218a2b48-63ca-462b-99ef-c4f829259d37\" (UID: \"218a2b48-63ca-462b-99ef-c4f829259d37\") " Jan 27 07:00:02 crc kubenswrapper[4796]: I0127 07:00:02.578751 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/218a2b48-63ca-462b-99ef-c4f829259d37-secret-volume\") pod \"218a2b48-63ca-462b-99ef-c4f829259d37\" (UID: \"218a2b48-63ca-462b-99ef-c4f829259d37\") " Jan 27 07:00:02 crc kubenswrapper[4796]: I0127 07:00:02.579336 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/218a2b48-63ca-462b-99ef-c4f829259d37-config-volume" (OuterVolumeSpecName: "config-volume") pod "218a2b48-63ca-462b-99ef-c4f829259d37" (UID: "218a2b48-63ca-462b-99ef-c4f829259d37"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 27 07:00:02 crc kubenswrapper[4796]: I0127 07:00:02.585055 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/218a2b48-63ca-462b-99ef-c4f829259d37-kube-api-access-grbt2" (OuterVolumeSpecName: "kube-api-access-grbt2") pod "218a2b48-63ca-462b-99ef-c4f829259d37" (UID: "218a2b48-63ca-462b-99ef-c4f829259d37"). InnerVolumeSpecName "kube-api-access-grbt2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:00:02 crc kubenswrapper[4796]: I0127 07:00:02.585346 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/218a2b48-63ca-462b-99ef-c4f829259d37-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "218a2b48-63ca-462b-99ef-c4f829259d37" (UID: "218a2b48-63ca-462b-99ef-c4f829259d37"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 27 07:00:02 crc kubenswrapper[4796]: I0127 07:00:02.679578 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grbt2\" (UniqueName: \"kubernetes.io/projected/218a2b48-63ca-462b-99ef-c4f829259d37-kube-api-access-grbt2\") on node \"crc\" DevicePath \"\"" Jan 27 07:00:02 crc kubenswrapper[4796]: I0127 07:00:02.679614 4796 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/218a2b48-63ca-462b-99ef-c4f829259d37-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 27 07:00:02 crc kubenswrapper[4796]: I0127 07:00:02.679624 4796 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/218a2b48-63ca-462b-99ef-c4f829259d37-config-volume\") on node \"crc\" DevicePath \"\"" Jan 27 07:00:03 crc kubenswrapper[4796]: I0127 07:00:03.290323 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-nz9tk" event={"ID":"218a2b48-63ca-462b-99ef-c4f829259d37","Type":"ContainerDied","Data":"84576b4d71f856fb6ecf84a7c2a51848cc288011812367eec2201c525054fe44"} Jan 27 07:00:03 crc kubenswrapper[4796]: I0127 07:00:03.290361 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491620-nz9tk" Jan 27 07:00:03 crc kubenswrapper[4796]: I0127 07:00:03.290365 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84576b4d71f856fb6ecf84a7c2a51848cc288011812367eec2201c525054fe44" Jan 27 07:00:03 crc kubenswrapper[4796]: I0127 07:00:03.788434 4796 patch_prober.go:28] interesting pod/machine-config-daemon-qfqgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:00:03 crc kubenswrapper[4796]: I0127 07:00:03.788784 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" podUID="84d7512b-555d-440a-b817-deb8ba12f61d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:00:06 crc kubenswrapper[4796]: I0127 07:00:06.836675 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-549fb8c6d4-w79rq"] Jan 27 07:00:06 crc kubenswrapper[4796]: E0127 07:00:06.836899 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="218a2b48-63ca-462b-99ef-c4f829259d37" containerName="collect-profiles" Jan 27 07:00:06 crc kubenswrapper[4796]: I0127 07:00:06.836911 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="218a2b48-63ca-462b-99ef-c4f829259d37" containerName="collect-profiles" Jan 27 07:00:06 crc kubenswrapper[4796]: I0127 07:00:06.837008 4796 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="218a2b48-63ca-462b-99ef-c4f829259d37" containerName="collect-profiles" Jan 27 07:00:06 crc kubenswrapper[4796]: I0127 07:00:06.837355 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-549fb8c6d4-w79rq" Jan 27 07:00:06 crc kubenswrapper[4796]: I0127 07:00:06.841389 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 27 07:00:06 crc kubenswrapper[4796]: I0127 07:00:06.841598 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 27 07:00:06 crc kubenswrapper[4796]: I0127 07:00:06.842082 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 27 07:00:06 crc kubenswrapper[4796]: I0127 07:00:06.842604 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 27 07:00:06 crc kubenswrapper[4796]: I0127 07:00:06.846044 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-mgrpj" Jan 27 07:00:06 crc kubenswrapper[4796]: I0127 07:00:06.861998 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-549fb8c6d4-w79rq"] Jan 27 07:00:06 crc kubenswrapper[4796]: I0127 07:00:06.943864 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wrr5\" (UniqueName: \"kubernetes.io/projected/95eec457-204e-4db3-8efe-973e0db5db30-kube-api-access-2wrr5\") pod \"metallb-operator-controller-manager-549fb8c6d4-w79rq\" (UID: \"95eec457-204e-4db3-8efe-973e0db5db30\") " pod="metallb-system/metallb-operator-controller-manager-549fb8c6d4-w79rq" Jan 27 07:00:06 crc kubenswrapper[4796]: I0127 07:00:06.943911 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/95eec457-204e-4db3-8efe-973e0db5db30-webhook-cert\") pod \"metallb-operator-controller-manager-549fb8c6d4-w79rq\" (UID: \"95eec457-204e-4db3-8efe-973e0db5db30\") " pod="metallb-system/metallb-operator-controller-manager-549fb8c6d4-w79rq" Jan 27 07:00:06 crc kubenswrapper[4796]: I0127 07:00:06.943957 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/95eec457-204e-4db3-8efe-973e0db5db30-apiservice-cert\") pod \"metallb-operator-controller-manager-549fb8c6d4-w79rq\" (UID: \"95eec457-204e-4db3-8efe-973e0db5db30\") " pod="metallb-system/metallb-operator-controller-manager-549fb8c6d4-w79rq" Jan 27 07:00:07 crc kubenswrapper[4796]: I0127 07:00:07.045121 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wrr5\" (UniqueName: \"kubernetes.io/projected/95eec457-204e-4db3-8efe-973e0db5db30-kube-api-access-2wrr5\") pod \"metallb-operator-controller-manager-549fb8c6d4-w79rq\" (UID: \"95eec457-204e-4db3-8efe-973e0db5db30\") " pod="metallb-system/metallb-operator-controller-manager-549fb8c6d4-w79rq" Jan 27 07:00:07 crc kubenswrapper[4796]: I0127 07:00:07.045196 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/95eec457-204e-4db3-8efe-973e0db5db30-webhook-cert\") pod \"metallb-operator-controller-manager-549fb8c6d4-w79rq\" 
(UID: \"95eec457-204e-4db3-8efe-973e0db5db30\") " pod="metallb-system/metallb-operator-controller-manager-549fb8c6d4-w79rq" Jan 27 07:00:07 crc kubenswrapper[4796]: I0127 07:00:07.045242 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/95eec457-204e-4db3-8efe-973e0db5db30-apiservice-cert\") pod \"metallb-operator-controller-manager-549fb8c6d4-w79rq\" (UID: \"95eec457-204e-4db3-8efe-973e0db5db30\") " pod="metallb-system/metallb-operator-controller-manager-549fb8c6d4-w79rq" Jan 27 07:00:07 crc kubenswrapper[4796]: I0127 07:00:07.051153 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/95eec457-204e-4db3-8efe-973e0db5db30-webhook-cert\") pod \"metallb-operator-controller-manager-549fb8c6d4-w79rq\" (UID: \"95eec457-204e-4db3-8efe-973e0db5db30\") " pod="metallb-system/metallb-operator-controller-manager-549fb8c6d4-w79rq" Jan 27 07:00:07 crc kubenswrapper[4796]: I0127 07:00:07.051153 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/95eec457-204e-4db3-8efe-973e0db5db30-apiservice-cert\") pod \"metallb-operator-controller-manager-549fb8c6d4-w79rq\" (UID: \"95eec457-204e-4db3-8efe-973e0db5db30\") " pod="metallb-system/metallb-operator-controller-manager-549fb8c6d4-w79rq" Jan 27 07:00:07 crc kubenswrapper[4796]: I0127 07:00:07.065647 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wrr5\" (UniqueName: \"kubernetes.io/projected/95eec457-204e-4db3-8efe-973e0db5db30-kube-api-access-2wrr5\") pod \"metallb-operator-controller-manager-549fb8c6d4-w79rq\" (UID: \"95eec457-204e-4db3-8efe-973e0db5db30\") " pod="metallb-system/metallb-operator-controller-manager-549fb8c6d4-w79rq" Jan 27 07:00:07 crc kubenswrapper[4796]: I0127 07:00:07.154085 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-549fb8c6d4-w79rq" Jan 27 07:00:07 crc kubenswrapper[4796]: I0127 07:00:07.178892 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-8f79c48d5-5xncb"] Jan 27 07:00:07 crc kubenswrapper[4796]: I0127 07:00:07.179782 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-8f79c48d5-5xncb" Jan 27 07:00:07 crc kubenswrapper[4796]: I0127 07:00:07.182301 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 27 07:00:07 crc kubenswrapper[4796]: I0127 07:00:07.182731 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 27 07:00:07 crc kubenswrapper[4796]: I0127 07:00:07.182914 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-wjfm8" Jan 27 07:00:07 crc kubenswrapper[4796]: I0127 07:00:07.192846 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-8f79c48d5-5xncb"] Jan 27 07:00:07 crc kubenswrapper[4796]: I0127 07:00:07.349313 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/63f0d934-32d6-4577-9967-b1bfa261d72a-webhook-cert\") pod \"metallb-operator-webhook-server-8f79c48d5-5xncb\" (UID: \"63f0d934-32d6-4577-9967-b1bfa261d72a\") " pod="metallb-system/metallb-operator-webhook-server-8f79c48d5-5xncb" Jan 27 07:00:07 crc kubenswrapper[4796]: I0127 07:00:07.349727 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpswh\" (UniqueName: \"kubernetes.io/projected/63f0d934-32d6-4577-9967-b1bfa261d72a-kube-api-access-cpswh\") pod \"metallb-operator-webhook-server-8f79c48d5-5xncb\" (UID: \"63f0d934-32d6-4577-9967-b1bfa261d72a\") " pod="metallb-system/metallb-operator-webhook-server-8f79c48d5-5xncb" Jan 27 07:00:07 crc kubenswrapper[4796]: I0127 07:00:07.349833 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/63f0d934-32d6-4577-9967-b1bfa261d72a-apiservice-cert\") pod \"metallb-operator-webhook-server-8f79c48d5-5xncb\" (UID: \"63f0d934-32d6-4577-9967-b1bfa261d72a\") " pod="metallb-system/metallb-operator-webhook-server-8f79c48d5-5xncb" Jan 27 07:00:07 crc kubenswrapper[4796]: I0127 07:00:07.450771 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/63f0d934-32d6-4577-9967-b1bfa261d72a-apiservice-cert\") pod \"metallb-operator-webhook-server-8f79c48d5-5xncb\" (UID: \"63f0d934-32d6-4577-9967-b1bfa261d72a\") " pod="metallb-system/metallb-operator-webhook-server-8f79c48d5-5xncb" Jan 27 07:00:07 crc kubenswrapper[4796]: I0127 07:00:07.450852 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/63f0d934-32d6-4577-9967-b1bfa261d72a-webhook-cert\") pod \"metallb-operator-webhook-server-8f79c48d5-5xncb\" (UID: \"63f0d934-32d6-4577-9967-b1bfa261d72a\") " pod="metallb-system/metallb-operator-webhook-server-8f79c48d5-5xncb" Jan 27 07:00:07 crc kubenswrapper[4796]: I0127 07:00:07.450890 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpswh\" (UniqueName: \"kubernetes.io/projected/63f0d934-32d6-4577-9967-b1bfa261d72a-kube-api-access-cpswh\") pod \"metallb-operator-webhook-server-8f79c48d5-5xncb\" (UID: \"63f0d934-32d6-4577-9967-b1bfa261d72a\") " pod="metallb-system/metallb-operator-webhook-server-8f79c48d5-5xncb" Jan 27 07:00:07 crc kubenswrapper[4796]: I0127 07:00:07.456764 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/63f0d934-32d6-4577-9967-b1bfa261d72a-apiservice-cert\") pod \"metallb-operator-webhook-server-8f79c48d5-5xncb\" (UID: \"63f0d934-32d6-4577-9967-b1bfa261d72a\") " pod="metallb-system/metallb-operator-webhook-server-8f79c48d5-5xncb" Jan 27 07:00:07 crc kubenswrapper[4796]: I0127 07:00:07.465280 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/63f0d934-32d6-4577-9967-b1bfa261d72a-webhook-cert\") pod \"metallb-operator-webhook-server-8f79c48d5-5xncb\" (UID: \"63f0d934-32d6-4577-9967-b1bfa261d72a\") " pod="metallb-system/metallb-operator-webhook-server-8f79c48d5-5xncb" Jan 27 07:00:07 crc kubenswrapper[4796]: I0127 07:00:07.469031 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpswh\" (UniqueName: \"kubernetes.io/projected/63f0d934-32d6-4577-9967-b1bfa261d72a-kube-api-access-cpswh\") pod \"metallb-operator-webhook-server-8f79c48d5-5xncb\" (UID: \"63f0d934-32d6-4577-9967-b1bfa261d72a\") " pod="metallb-system/metallb-operator-webhook-server-8f79c48d5-5xncb" Jan 27 07:00:07 crc kubenswrapper[4796]: I0127 07:00:07.538782 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-8f79c48d5-5xncb" Jan 27 07:00:07 crc kubenswrapper[4796]: I0127 07:00:07.612500 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-549fb8c6d4-w79rq"] Jan 27 07:00:08 crc kubenswrapper[4796]: I0127 07:00:08.030475 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-8f79c48d5-5xncb"] Jan 27 07:00:08 crc kubenswrapper[4796]: W0127 07:00:08.038335 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63f0d934_32d6_4577_9967_b1bfa261d72a.slice/crio-85ebf954f1ac3e539198312ef0dacd44544288c3fc2102453e3a7be8b7497d3c WatchSource:0}: Error finding container 85ebf954f1ac3e539198312ef0dacd44544288c3fc2102453e3a7be8b7497d3c: Status 404 returned error can't find the container with id 85ebf954f1ac3e539198312ef0dacd44544288c3fc2102453e3a7be8b7497d3c Jan 27 07:00:08 crc kubenswrapper[4796]: I0127 07:00:08.318903 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-549fb8c6d4-w79rq" event={"ID":"95eec457-204e-4db3-8efe-973e0db5db30","Type":"ContainerStarted","Data":"aeb5e4d00db0dc2efde96367992933e77cae53f43d646defcb062a23c90d64fe"} Jan 27 07:00:08 crc kubenswrapper[4796]: I0127 07:00:08.320015 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-8f79c48d5-5xncb" event={"ID":"63f0d934-32d6-4577-9967-b1bfa261d72a","Type":"ContainerStarted","Data":"85ebf954f1ac3e539198312ef0dacd44544288c3fc2102453e3a7be8b7497d3c"} Jan 27 07:00:14 crc kubenswrapper[4796]: I0127 07:00:14.358838 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-549fb8c6d4-w79rq" event={"ID":"95eec457-204e-4db3-8efe-973e0db5db30","Type":"ContainerStarted","Data":"a10a9ce7efbeaa81583b630766cd2a72661fa293169cf3be87aac5a1fadc4344"} Jan 27 07:00:14 crc kubenswrapper[4796]: I0127 07:00:14.359426 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-549fb8c6d4-w79rq" Jan 27 07:00:14 crc kubenswrapper[4796]: I0127 07:00:14.361258 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-8f79c48d5-5xncb" event={"ID":"63f0d934-32d6-4577-9967-b1bfa261d72a","Type":"ContainerStarted","Data":"6ee0563535d6be00c493d6419d158511cc9ef80cc5773b0fa1def6fddd6307e2"} Jan 27 07:00:14 crc kubenswrapper[4796]: I0127 07:00:14.361437 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-8f79c48d5-5xncb" Jan 27 07:00:14 crc kubenswrapper[4796]: I0127 07:00:14.380011 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-549fb8c6d4-w79rq" podStartSLOduration=2.673173661 podStartE2EDuration="8.379989542s" podCreationTimestamp="2026-01-27 07:00:06 +0000 UTC" firstStartedPulling="2026-01-27 07:00:07.637051443 +0000 UTC m=+828.744018770" lastFinishedPulling="2026-01-27 07:00:13.343867304 +0000 UTC m=+834.450834651" observedRunningTime="2026-01-27 07:00:14.376028942 +0000 UTC m=+835.482996269" watchObservedRunningTime="2026-01-27 07:00:14.379989542 +0000 UTC m=+835.486956889" Jan 27 07:00:14 crc kubenswrapper[4796]: I0127 07:00:14.396606 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-8f79c48d5-5xncb" podStartSLOduration=2.078169626 podStartE2EDuration="7.39658512s" podCreationTimestamp="2026-01-27 07:00:07 +0000 UTC" firstStartedPulling="2026-01-27 07:00:08.043890755 +0000 UTC m=+829.150858082" lastFinishedPulling="2026-01-27 07:00:13.362306229 +0000 UTC m=+834.469273576" observedRunningTime="2026-01-27 07:00:14.394662382 +0000 UTC m=+835.501629719" watchObservedRunningTime="2026-01-27 07:00:14.39658512 +0000 UTC m=+835.503552467" Jan 27 07:00:27 crc kubenswrapper[4796]: I0127 07:00:27.542723 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-8f79c48d5-5xncb" Jan 27 07:00:33 crc kubenswrapper[4796]: I0127 07:00:33.788459 4796 patch_prober.go:28] interesting pod/machine-config-daemon-qfqgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:00:33 crc kubenswrapper[4796]: I0127 07:00:33.788928 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" podUID="84d7512b-555d-440a-b817-deb8ba12f61d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:00:33 crc kubenswrapper[4796]: I0127 07:00:33.788980 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" Jan 27 07:00:33 crc kubenswrapper[4796]: I0127 07:00:33.789529 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2c94d4638721f045282b7bf6b0d1a7f76dc81b09a6581078ab42936dd6c8b456"} pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 07:00:33 crc kubenswrapper[4796]: I0127 
07:00:33.789596 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" podUID="84d7512b-555d-440a-b817-deb8ba12f61d" containerName="machine-config-daemon" containerID="cri-o://2c94d4638721f045282b7bf6b0d1a7f76dc81b09a6581078ab42936dd6c8b456" gracePeriod=600 Jan 27 07:00:34 crc kubenswrapper[4796]: I0127 07:00:34.490138 4796 generic.go:334] "Generic (PLEG): container finished" podID="84d7512b-555d-440a-b817-deb8ba12f61d" containerID="2c94d4638721f045282b7bf6b0d1a7f76dc81b09a6581078ab42936dd6c8b456" exitCode=0 Jan 27 07:00:34 crc kubenswrapper[4796]: I0127 07:00:34.490212 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" event={"ID":"84d7512b-555d-440a-b817-deb8ba12f61d","Type":"ContainerDied","Data":"2c94d4638721f045282b7bf6b0d1a7f76dc81b09a6581078ab42936dd6c8b456"} Jan 27 07:00:34 crc kubenswrapper[4796]: I0127 07:00:34.490480 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" event={"ID":"84d7512b-555d-440a-b817-deb8ba12f61d","Type":"ContainerStarted","Data":"5279cebf05cbd111eece26525bf47f8c49c929ff7313221c4f6857c9246306cd"} Jan 27 07:00:34 crc kubenswrapper[4796]: I0127 07:00:34.490500 4796 scope.go:117] "RemoveContainer" containerID="dfa6b1db554a19ec28fc2fce13b6e36d08d4e2a69c60abcaaae99832b0c71be3" Jan 27 07:00:47 crc kubenswrapper[4796]: I0127 07:00:47.157114 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-549fb8c6d4-w79rq" Jan 27 07:00:47 crc kubenswrapper[4796]: I0127 07:00:47.870996 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-bkdpj"] Jan 27 07:00:47 crc kubenswrapper[4796]: I0127 07:00:47.871918 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bkdpj" Jan 27 07:00:47 crc kubenswrapper[4796]: I0127 07:00:47.877230 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-lhqk2" Jan 27 07:00:47 crc kubenswrapper[4796]: I0127 07:00:47.877467 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 27 07:00:47 crc kubenswrapper[4796]: I0127 07:00:47.881113 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-rhh26"] Jan 27 07:00:47 crc kubenswrapper[4796]: I0127 07:00:47.883745 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-rhh26" Jan 27 07:00:47 crc kubenswrapper[4796]: I0127 07:00:47.885554 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 27 07:00:47 crc kubenswrapper[4796]: I0127 07:00:47.885566 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 27 07:00:47 crc kubenswrapper[4796]: I0127 07:00:47.895923 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-bkdpj"] Jan 27 07:00:47 crc kubenswrapper[4796]: I0127 07:00:47.974665 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-7dwl5"] Jan 27 07:00:47 crc kubenswrapper[4796]: I0127 07:00:47.975706 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-7dwl5" Jan 27 07:00:47 crc kubenswrapper[4796]: I0127 07:00:47.979006 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 27 07:00:47 crc kubenswrapper[4796]: I0127 07:00:47.979061 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 27 07:00:47 crc kubenswrapper[4796]: I0127 07:00:47.979140 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 27 07:00:47 crc kubenswrapper[4796]: I0127 07:00:47.979905 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-rn64r" Jan 27 07:00:47 crc kubenswrapper[4796]: I0127 07:00:47.992913 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh84q\" (UniqueName: \"kubernetes.io/projected/29719030-ae0e-4e4d-8932-9e7edc2f1b1f-kube-api-access-lh84q\") pod \"frr-k8s-webhook-server-7df86c4f6c-bkdpj\" (UID: \"29719030-ae0e-4e4d-8932-9e7edc2f1b1f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bkdpj" Jan 27 07:00:47 crc kubenswrapper[4796]: I0127 07:00:47.992977 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c-reloader\") pod \"frr-k8s-rhh26\" (UID: \"28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c\") " pod="metallb-system/frr-k8s-rhh26" Jan 27 07:00:47 crc kubenswrapper[4796]: I0127 07:00:47.993009 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c-metrics-certs\") pod \"frr-k8s-rhh26\" (UID: \"28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c\") " pod="metallb-system/frr-k8s-rhh26" Jan 27 07:00:47 crc kubenswrapper[4796]: I0127 07:00:47.993037 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c-frr-conf\") pod \"frr-k8s-rhh26\" (UID: \"28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c\") " pod="metallb-system/frr-k8s-rhh26" Jan 27 07:00:47 crc kubenswrapper[4796]: I0127 07:00:47.993853 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c-metrics\") pod \"frr-k8s-rhh26\" (UID: \"28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c\") " pod="metallb-system/frr-k8s-rhh26" Jan 27 07:00:47 crc kubenswrapper[4796]: I0127 07:00:47.993899 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfnmc\" (UniqueName: \"kubernetes.io/projected/28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c-kube-api-access-vfnmc\") pod \"frr-k8s-rhh26\" (UID: \"28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c\") " pod="metallb-system/frr-k8s-rhh26" Jan 27 07:00:47 crc kubenswrapper[4796]: I0127 07:00:47.993974 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c-frr-sockets\") pod \"frr-k8s-rhh26\" (UID: \"28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c\") " pod="metallb-system/frr-k8s-rhh26" Jan 27 07:00:47 crc kubenswrapper[4796]: I0127 07:00:47.994080 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/29719030-ae0e-4e4d-8932-9e7edc2f1b1f-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-bkdpj\" (UID: \"29719030-ae0e-4e4d-8932-9e7edc2f1b1f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bkdpj" Jan 27 07:00:47 crc kubenswrapper[4796]: I0127 07:00:47.994126 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c-frr-startup\") pod \"frr-k8s-rhh26\" (UID: \"28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c\") " pod="metallb-system/frr-k8s-rhh26" Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.009744 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-dlvdr"] Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.010882 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-dlvdr" Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.021584 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-dlvdr"] Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.022909 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.096196 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b831a3a4-784d-47c8-bb07-e4f40445f066-memberlist\") pod \"speaker-7dwl5\" (UID: \"b831a3a4-784d-47c8-bb07-e4f40445f066\") " pod="metallb-system/speaker-7dwl5" Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.096331 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c-metrics\") pod \"frr-k8s-rhh26\" (UID: \"28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c\") " pod="metallb-system/frr-k8s-rhh26" Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.096361 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx7c5\" (UniqueName: \"kubernetes.io/projected/9f97c3a8-70cb-4ee7-ab99-25430a2e3bc9-kube-api-access-zx7c5\") pod \"controller-6968d8fdc4-dlvdr\" (UID: \"9f97c3a8-70cb-4ee7-ab99-25430a2e3bc9\") " pod="metallb-system/controller-6968d8fdc4-dlvdr" Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.096377 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7xtp\" (UniqueName: \"kubernetes.io/projected/b831a3a4-784d-47c8-bb07-e4f40445f066-kube-api-access-p7xtp\") pod \"speaker-7dwl5\" (UID: \"b831a3a4-784d-47c8-bb07-e4f40445f066\") " pod="metallb-system/speaker-7dwl5" Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.096395 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfnmc\" (UniqueName: \"kubernetes.io/projected/28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c-kube-api-access-vfnmc\") pod \"frr-k8s-rhh26\" (UID: \"28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c\") " pod="metallb-system/frr-k8s-rhh26" Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.096422 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: 
\"kubernetes.io/empty-dir/28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c-frr-sockets\") pod \"frr-k8s-rhh26\" (UID: \"28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c\") " pod="metallb-system/frr-k8s-rhh26" Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.096483 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b831a3a4-784d-47c8-bb07-e4f40445f066-metallb-excludel2\") pod \"speaker-7dwl5\" (UID: \"b831a3a4-784d-47c8-bb07-e4f40445f066\") " pod="metallb-system/speaker-7dwl5" Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.096515 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/29719030-ae0e-4e4d-8932-9e7edc2f1b1f-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-bkdpj\" (UID: \"29719030-ae0e-4e4d-8932-9e7edc2f1b1f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bkdpj" Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.096556 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9f97c3a8-70cb-4ee7-ab99-25430a2e3bc9-metrics-certs\") pod \"controller-6968d8fdc4-dlvdr\" (UID: \"9f97c3a8-70cb-4ee7-ab99-25430a2e3bc9\") " pod="metallb-system/controller-6968d8fdc4-dlvdr" Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.096582 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9f97c3a8-70cb-4ee7-ab99-25430a2e3bc9-cert\") pod \"controller-6968d8fdc4-dlvdr\" (UID: \"9f97c3a8-70cb-4ee7-ab99-25430a2e3bc9\") " pod="metallb-system/controller-6968d8fdc4-dlvdr" Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.096608 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c-frr-startup\") pod \"frr-k8s-rhh26\" (UID: \"28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c\") " pod="metallb-system/frr-k8s-rhh26" Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.096655 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b831a3a4-784d-47c8-bb07-e4f40445f066-metrics-certs\") pod \"speaker-7dwl5\" (UID: \"b831a3a4-784d-47c8-bb07-e4f40445f066\") " pod="metallb-system/speaker-7dwl5" Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.096693 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh84q\" (UniqueName: \"kubernetes.io/projected/29719030-ae0e-4e4d-8932-9e7edc2f1b1f-kube-api-access-lh84q\") pod \"frr-k8s-webhook-server-7df86c4f6c-bkdpj\" (UID: \"29719030-ae0e-4e4d-8932-9e7edc2f1b1f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bkdpj" Jan 27 07:00:48 crc kubenswrapper[4796]: E0127 07:00:48.096723 4796 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.096739 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c-reloader\") pod \"frr-k8s-rhh26\" (UID: \"28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c\") " pod="metallb-system/frr-k8s-rhh26" Jan 27 07:00:48 crc kubenswrapper[4796]: E0127 07:00:48.096814 4796 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29719030-ae0e-4e4d-8932-9e7edc2f1b1f-cert podName:29719030-ae0e-4e4d-8932-9e7edc2f1b1f nodeName:}" failed. No retries permitted until 2026-01-27 07:00:48.59679084 +0000 UTC m=+869.703758257 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/29719030-ae0e-4e4d-8932-9e7edc2f1b1f-cert") pod "frr-k8s-webhook-server-7df86c4f6c-bkdpj" (UID: "29719030-ae0e-4e4d-8932-9e7edc2f1b1f") : secret "frr-k8s-webhook-server-cert" not found Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.096855 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c-metrics-certs\") pod \"frr-k8s-rhh26\" (UID: \"28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c\") " pod="metallb-system/frr-k8s-rhh26" Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.096905 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c-frr-conf\") pod \"frr-k8s-rhh26\" (UID: \"28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c\") " pod="metallb-system/frr-k8s-rhh26" Jan 27 07:00:48 crc kubenswrapper[4796]: E0127 07:00:48.096972 4796 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 27 07:00:48 crc kubenswrapper[4796]: E0127 07:00:48.097012 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c-metrics-certs podName:28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c nodeName:}" failed. No retries permitted until 2026-01-27 07:00:48.596998195 +0000 UTC m=+869.703965522 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c-metrics-certs") pod "frr-k8s-rhh26" (UID: "28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c") : secret "frr-k8s-certs-secret" not found Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.097133 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c-metrics\") pod \"frr-k8s-rhh26\" (UID: \"28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c\") " pod="metallb-system/frr-k8s-rhh26" Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.097199 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c-frr-sockets\") pod \"frr-k8s-rhh26\" (UID: \"28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c\") " pod="metallb-system/frr-k8s-rhh26" Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.097205 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c-reloader\") pod \"frr-k8s-rhh26\" (UID: \"28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c\") " pod="metallb-system/frr-k8s-rhh26" Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.097236 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c-frr-conf\") pod \"frr-k8s-rhh26\" (UID: \"28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c\") " pod="metallb-system/frr-k8s-rhh26" Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.097519 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c-frr-startup\") pod \"frr-k8s-rhh26\" (UID: \"28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c\") " pod="metallb-system/frr-k8s-rhh26" Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.119063 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfnmc\" (UniqueName: \"kubernetes.io/projected/28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c-kube-api-access-vfnmc\") pod \"frr-k8s-rhh26\" (UID: \"28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c\") " pod="metallb-system/frr-k8s-rhh26" Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.123207 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh84q\" (UniqueName: \"kubernetes.io/projected/29719030-ae0e-4e4d-8932-9e7edc2f1b1f-kube-api-access-lh84q\") pod \"frr-k8s-webhook-server-7df86c4f6c-bkdpj\" (UID: \"29719030-ae0e-4e4d-8932-9e7edc2f1b1f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bkdpj" Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.197482 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b831a3a4-784d-47c8-bb07-e4f40445f066-metallb-excludel2\") pod \"speaker-7dwl5\" (UID: \"b831a3a4-784d-47c8-bb07-e4f40445f066\") " pod="metallb-system/speaker-7dwl5" Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.197561 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9f97c3a8-70cb-4ee7-ab99-25430a2e3bc9-metrics-certs\") pod \"controller-6968d8fdc4-dlvdr\" (UID: \"9f97c3a8-70cb-4ee7-ab99-25430a2e3bc9\") " pod="metallb-system/controller-6968d8fdc4-dlvdr" Jan 27 07:00:48 crc 
kubenswrapper[4796]: I0127 07:00:48.197580 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9f97c3a8-70cb-4ee7-ab99-25430a2e3bc9-cert\") pod \"controller-6968d8fdc4-dlvdr\" (UID: \"9f97c3a8-70cb-4ee7-ab99-25430a2e3bc9\") " pod="metallb-system/controller-6968d8fdc4-dlvdr" Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.197604 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b831a3a4-784d-47c8-bb07-e4f40445f066-metrics-certs\") pod \"speaker-7dwl5\" (UID: \"b831a3a4-784d-47c8-bb07-e4f40445f066\") " pod="metallb-system/speaker-7dwl5" Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.197652 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b831a3a4-784d-47c8-bb07-e4f40445f066-memberlist\") pod \"speaker-7dwl5\" (UID: \"b831a3a4-784d-47c8-bb07-e4f40445f066\") " pod="metallb-system/speaker-7dwl5" Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.197683 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx7c5\" (UniqueName: \"kubernetes.io/projected/9f97c3a8-70cb-4ee7-ab99-25430a2e3bc9-kube-api-access-zx7c5\") pod \"controller-6968d8fdc4-dlvdr\" (UID: \"9f97c3a8-70cb-4ee7-ab99-25430a2e3bc9\") " pod="metallb-system/controller-6968d8fdc4-dlvdr" Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.197698 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7xtp\" (UniqueName: \"kubernetes.io/projected/b831a3a4-784d-47c8-bb07-e4f40445f066-kube-api-access-p7xtp\") pod \"speaker-7dwl5\" (UID: \"b831a3a4-784d-47c8-bb07-e4f40445f066\") " pod="metallb-system/speaker-7dwl5" Jan 27 07:00:48 crc kubenswrapper[4796]: E0127 07:00:48.197819 4796 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 27 07:00:48 crc kubenswrapper[4796]: E0127 07:00:48.197887 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b831a3a4-784d-47c8-bb07-e4f40445f066-memberlist podName:b831a3a4-784d-47c8-bb07-e4f40445f066 nodeName:}" failed. No retries permitted until 2026-01-27 07:00:48.697871649 +0000 UTC m=+869.804838976 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b831a3a4-784d-47c8-bb07-e4f40445f066-memberlist") pod "speaker-7dwl5" (UID: "b831a3a4-784d-47c8-bb07-e4f40445f066") : secret "metallb-memberlist" not found Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.198160 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b831a3a4-784d-47c8-bb07-e4f40445f066-metallb-excludel2\") pod \"speaker-7dwl5\" (UID: \"b831a3a4-784d-47c8-bb07-e4f40445f066\") " pod="metallb-system/speaker-7dwl5" Jan 27 07:00:48 crc kubenswrapper[4796]: E0127 07:00:48.198206 4796 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 27 07:00:48 crc kubenswrapper[4796]: E0127 07:00:48.198270 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b831a3a4-784d-47c8-bb07-e4f40445f066-metrics-certs podName:b831a3a4-784d-47c8-bb07-e4f40445f066 nodeName:}" failed. 
No retries permitted until 2026-01-27 07:00:48.698253859 +0000 UTC m=+869.805221186 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b831a3a4-784d-47c8-bb07-e4f40445f066-metrics-certs") pod "speaker-7dwl5" (UID: "b831a3a4-784d-47c8-bb07-e4f40445f066") : secret "speaker-certs-secret" not found Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.202353 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9f97c3a8-70cb-4ee7-ab99-25430a2e3bc9-metrics-certs\") pod \"controller-6968d8fdc4-dlvdr\" (UID: \"9f97c3a8-70cb-4ee7-ab99-25430a2e3bc9\") " pod="metallb-system/controller-6968d8fdc4-dlvdr" Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.203606 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.214088 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9f97c3a8-70cb-4ee7-ab99-25430a2e3bc9-cert\") pod \"controller-6968d8fdc4-dlvdr\" (UID: \"9f97c3a8-70cb-4ee7-ab99-25430a2e3bc9\") " pod="metallb-system/controller-6968d8fdc4-dlvdr" Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.220915 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7xtp\" (UniqueName: \"kubernetes.io/projected/b831a3a4-784d-47c8-bb07-e4f40445f066-kube-api-access-p7xtp\") pod \"speaker-7dwl5\" (UID: \"b831a3a4-784d-47c8-bb07-e4f40445f066\") " pod="metallb-system/speaker-7dwl5" Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.221463 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx7c5\" (UniqueName: \"kubernetes.io/projected/9f97c3a8-70cb-4ee7-ab99-25430a2e3bc9-kube-api-access-zx7c5\") pod \"controller-6968d8fdc4-dlvdr\" (UID: \"9f97c3a8-70cb-4ee7-ab99-25430a2e3bc9\") " pod="metallb-system/controller-6968d8fdc4-dlvdr" Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.324504 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-dlvdr" Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.588190 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-dlvdr"] Jan 27 07:00:48 crc kubenswrapper[4796]: W0127 07:00:48.592314 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f97c3a8_70cb_4ee7_ab99_25430a2e3bc9.slice/crio-d97396a118568ff59df0bf8791e47169c6c0416e7ea625be57471074cd9a56a2 WatchSource:0}: Error finding container d97396a118568ff59df0bf8791e47169c6c0416e7ea625be57471074cd9a56a2: Status 404 returned error can't find the container with id d97396a118568ff59df0bf8791e47169c6c0416e7ea625be57471074cd9a56a2 Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.603915 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/29719030-ae0e-4e4d-8932-9e7edc2f1b1f-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-bkdpj\" (UID: \"29719030-ae0e-4e4d-8932-9e7edc2f1b1f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bkdpj" Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.604029 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c-metrics-certs\") pod \"frr-k8s-rhh26\" (UID: \"28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c\") " pod="metallb-system/frr-k8s-rhh26" Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.612114 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/29719030-ae0e-4e4d-8932-9e7edc2f1b1f-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-bkdpj\" (UID: \"29719030-ae0e-4e4d-8932-9e7edc2f1b1f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bkdpj" Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.612927 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c-metrics-certs\") pod \"frr-k8s-rhh26\" (UID: \"28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c\") " pod="metallb-system/frr-k8s-rhh26" Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.614031 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-dlvdr" event={"ID":"9f97c3a8-70cb-4ee7-ab99-25430a2e3bc9","Type":"ContainerStarted","Data":"d97396a118568ff59df0bf8791e47169c6c0416e7ea625be57471074cd9a56a2"} Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.705997 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b831a3a4-784d-47c8-bb07-e4f40445f066-memberlist\") pod \"speaker-7dwl5\" (UID: \"b831a3a4-784d-47c8-bb07-e4f40445f066\") " pod="metallb-system/speaker-7dwl5" Jan 27 07:00:48 crc kubenswrapper[4796]: E0127 07:00:48.706180 4796 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 27 07:00:48 crc kubenswrapper[4796]: E0127 07:00:48.706723 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b831a3a4-784d-47c8-bb07-e4f40445f066-memberlist podName:b831a3a4-784d-47c8-bb07-e4f40445f066 nodeName:}" failed. No retries permitted until 2026-01-27 07:00:49.706704845 +0000 UTC m=+870.813672162 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b831a3a4-784d-47c8-bb07-e4f40445f066-memberlist") pod "speaker-7dwl5" (UID: "b831a3a4-784d-47c8-bb07-e4f40445f066") : secret "metallb-memberlist" not found Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.706661 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b831a3a4-784d-47c8-bb07-e4f40445f066-metrics-certs\") pod \"speaker-7dwl5\" (UID: \"b831a3a4-784d-47c8-bb07-e4f40445f066\") " pod="metallb-system/speaker-7dwl5" Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.714725 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b831a3a4-784d-47c8-bb07-e4f40445f066-metrics-certs\") pod \"speaker-7dwl5\" (UID: \"b831a3a4-784d-47c8-bb07-e4f40445f066\") " pod="metallb-system/speaker-7dwl5" Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.790213 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bkdpj" Jan 27 07:00:48 crc kubenswrapper[4796]: I0127 07:00:48.803907 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-rhh26" Jan 27 07:00:49 crc kubenswrapper[4796]: I0127 07:00:49.034665 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-bkdpj"] Jan 27 07:00:49 crc kubenswrapper[4796]: W0127 07:00:49.043048 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29719030_ae0e_4e4d_8932_9e7edc2f1b1f.slice/crio-cac8f436baeaa0049185da400b4c4e14c766d290adebe1c9697a89199e753e04 WatchSource:0}: Error finding container cac8f436baeaa0049185da400b4c4e14c766d290adebe1c9697a89199e753e04: Status 404 returned error can't find the container with id cac8f436baeaa0049185da400b4c4e14c766d290adebe1c9697a89199e753e04 Jan 27 07:00:49 crc kubenswrapper[4796]: I0127 07:00:49.621483 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bkdpj" event={"ID":"29719030-ae0e-4e4d-8932-9e7edc2f1b1f","Type":"ContainerStarted","Data":"cac8f436baeaa0049185da400b4c4e14c766d290adebe1c9697a89199e753e04"} Jan 27 07:00:49 crc kubenswrapper[4796]: I0127 07:00:49.622928 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rhh26" event={"ID":"28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c","Type":"ContainerStarted","Data":"85d6808c5b0644d8fb87cab2a5f9093b2508aec3209866a40baa8763577b5b67"} Jan 27 07:00:49 crc kubenswrapper[4796]: I0127 07:00:49.625053 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-dlvdr" event={"ID":"9f97c3a8-70cb-4ee7-ab99-25430a2e3bc9","Type":"ContainerStarted","Data":"4073b3db229b996e165216cb769af29755c787810343a3bc0374e2ed8af9e5c5"} Jan 27 07:00:49 crc kubenswrapper[4796]: I0127 07:00:49.625103 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-dlvdr" event={"ID":"9f97c3a8-70cb-4ee7-ab99-25430a2e3bc9","Type":"ContainerStarted","Data":"3c5467bfa1be9dc1cd127bbd1217d15f772d1b58578b9e354c9081142e41b7fb"} Jan 27 07:00:49 crc kubenswrapper[4796]: I0127 07:00:49.625213 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-dlvdr" Jan 27 07:00:49 crc kubenswrapper[4796]: I0127 
07:00:49.643808 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-dlvdr" podStartSLOduration=2.643788564 podStartE2EDuration="2.643788564s" podCreationTimestamp="2026-01-27 07:00:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:00:49.638836449 +0000 UTC m=+870.745803776" watchObservedRunningTime="2026-01-27 07:00:49.643788564 +0000 UTC m=+870.750755891" Jan 27 07:00:49 crc kubenswrapper[4796]: I0127 07:00:49.720366 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b831a3a4-784d-47c8-bb07-e4f40445f066-memberlist\") pod \"speaker-7dwl5\" (UID: \"b831a3a4-784d-47c8-bb07-e4f40445f066\") " pod="metallb-system/speaker-7dwl5" Jan 27 07:00:49 crc kubenswrapper[4796]: I0127 07:00:49.728148 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b831a3a4-784d-47c8-bb07-e4f40445f066-memberlist\") pod \"speaker-7dwl5\" (UID: \"b831a3a4-784d-47c8-bb07-e4f40445f066\") " pod="metallb-system/speaker-7dwl5" Jan 27 07:00:49 crc kubenswrapper[4796]: I0127 07:00:49.790236 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-7dwl5" Jan 27 07:00:49 crc kubenswrapper[4796]: W0127 07:00:49.836095 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb831a3a4_784d_47c8_bb07_e4f40445f066.slice/crio-e75b328e08c32eb8b0651ac5573dcbb52ed27d6366836057579f66193b356b7f WatchSource:0}: Error finding container e75b328e08c32eb8b0651ac5573dcbb52ed27d6366836057579f66193b356b7f: Status 404 returned error can't find the container with id e75b328e08c32eb8b0651ac5573dcbb52ed27d6366836057579f66193b356b7f Jan 27 07:00:50 crc kubenswrapper[4796]: I0127 07:00:50.642086 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7dwl5" event={"ID":"b831a3a4-784d-47c8-bb07-e4f40445f066","Type":"ContainerStarted","Data":"d7f7422edae4a4391aa25659f74d836b3fa0145bd445958a9de8e3d7599a7b9a"} Jan 27 07:00:50 crc kubenswrapper[4796]: I0127 07:00:50.642411 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7dwl5" event={"ID":"b831a3a4-784d-47c8-bb07-e4f40445f066","Type":"ContainerStarted","Data":"46338afbb0afb3b103463168b90e4e434a2184255121d882eb930a71a3cdd7da"} Jan 27 07:00:50 crc kubenswrapper[4796]: I0127 07:00:50.642428 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7dwl5" event={"ID":"b831a3a4-784d-47c8-bb07-e4f40445f066","Type":"ContainerStarted","Data":"e75b328e08c32eb8b0651ac5573dcbb52ed27d6366836057579f66193b356b7f"} Jan 27 07:00:50 crc kubenswrapper[4796]: I0127 07:00:50.642686 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-7dwl5" Jan 27 07:00:50 crc kubenswrapper[4796]: I0127 07:00:50.661657 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-7dwl5" podStartSLOduration=3.66164116 podStartE2EDuration="3.66164116s" podCreationTimestamp="2026-01-27 07:00:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:00:50.661409375 +0000 UTC m=+871.768376712" watchObservedRunningTime="2026-01-27 07:00:50.66164116 +0000 UTC 
m=+871.768608487" Jan 27 07:00:57 crc kubenswrapper[4796]: I0127 07:00:57.691121 4796 generic.go:334] "Generic (PLEG): container finished" podID="28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c" containerID="bb1bde31ffa22a721529d6e76cc0207e985ac281c2dfd85d6663adfc488ff3d8" exitCode=0 Jan 27 07:00:57 crc kubenswrapper[4796]: I0127 07:00:57.691225 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rhh26" event={"ID":"28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c","Type":"ContainerDied","Data":"bb1bde31ffa22a721529d6e76cc0207e985ac281c2dfd85d6663adfc488ff3d8"} Jan 27 07:00:57 crc kubenswrapper[4796]: I0127 07:00:57.693822 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bkdpj" event={"ID":"29719030-ae0e-4e4d-8932-9e7edc2f1b1f","Type":"ContainerStarted","Data":"aa40912b9d63129d86f104f4a8f9dd67810855fedbe67e6e2ae2efcf43987924"} Jan 27 07:00:57 crc kubenswrapper[4796]: I0127 07:00:57.693977 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bkdpj" Jan 27 07:00:57 crc kubenswrapper[4796]: I0127 07:00:57.742107 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bkdpj" podStartSLOduration=2.995361382 podStartE2EDuration="10.742089373s" podCreationTimestamp="2026-01-27 07:00:47 +0000 UTC" firstStartedPulling="2026-01-27 07:00:49.046255781 +0000 UTC m=+870.153223108" lastFinishedPulling="2026-01-27 07:00:56.792983772 +0000 UTC m=+877.899951099" observedRunningTime="2026-01-27 07:00:57.736851472 +0000 UTC m=+878.843818799" watchObservedRunningTime="2026-01-27 07:00:57.742089373 +0000 UTC m=+878.849056700" Jan 27 07:00:58 crc kubenswrapper[4796]: I0127 07:00:58.329893 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-dlvdr" Jan 27 07:00:58 crc kubenswrapper[4796]: I0127 07:00:58.700779 4796 generic.go:334] "Generic (PLEG): container finished" podID="28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c" containerID="3525f30d4f299be3a056b982cdb31a52d11e378799fa9a0081b0be09817bc944" exitCode=0 Jan 27 07:00:58 crc kubenswrapper[4796]: I0127 07:00:58.700913 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rhh26" event={"ID":"28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c","Type":"ContainerDied","Data":"3525f30d4f299be3a056b982cdb31a52d11e378799fa9a0081b0be09817bc944"} Jan 27 07:00:59 crc kubenswrapper[4796]: I0127 07:00:59.708959 4796 generic.go:334] "Generic (PLEG): container finished" podID="28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c" containerID="8252d3b9444f1db8642954bcd7d6e107ae0ff0c9ee5696f411c436485026f139" exitCode=0 Jan 27 07:00:59 crc kubenswrapper[4796]: I0127 07:00:59.709085 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rhh26" event={"ID":"28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c","Type":"ContainerDied","Data":"8252d3b9444f1db8642954bcd7d6e107ae0ff0c9ee5696f411c436485026f139"} Jan 27 07:01:00 crc kubenswrapper[4796]: I0127 07:01:00.718925 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rhh26" event={"ID":"28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c","Type":"ContainerStarted","Data":"3a98b26d3407a9a7e332bddf740c8237b08b75b59ffaf1584707465643b8320a"} Jan 27 07:01:00 crc kubenswrapper[4796]: I0127 07:01:00.718970 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rhh26" 
event={"ID":"28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c","Type":"ContainerStarted","Data":"1253f8dd3c25962ba00a97d5aaab500168f64239bf3a16c5e68a6d8c2e0eb8f3"} Jan 27 07:01:00 crc kubenswrapper[4796]: I0127 07:01:00.718982 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rhh26" event={"ID":"28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c","Type":"ContainerStarted","Data":"39a0f3456fb9c23f57db65238afaecdc58c9c800b53bbe4c49a3d58138eafdd1"} Jan 27 07:01:00 crc kubenswrapper[4796]: I0127 07:01:00.718991 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rhh26" event={"ID":"28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c","Type":"ContainerStarted","Data":"f8c5daaae31a2bfe27843b3b735776a85155fbc4e0a23b0942d74effd4469fa8"} Jan 27 07:01:00 crc kubenswrapper[4796]: I0127 07:01:00.719000 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rhh26" event={"ID":"28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c","Type":"ContainerStarted","Data":"b8f3c1859195cbb9546c9bd3833ed9cbb7db6029db0c11f724e2204f55ddaa44"} Jan 27 07:01:01 crc kubenswrapper[4796]: I0127 07:01:01.733003 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rhh26" event={"ID":"28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c","Type":"ContainerStarted","Data":"b2acdb3cb403c309edf9beba8e795e3fc7d905ba1de6fde020963138f8d89695"} Jan 27 07:01:01 crc kubenswrapper[4796]: I0127 07:01:01.733431 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-rhh26" Jan 27 07:01:01 crc kubenswrapper[4796]: I0127 07:01:01.763901 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-rhh26" podStartSLOduration=7.06480149 podStartE2EDuration="14.763869218s" podCreationTimestamp="2026-01-27 07:00:47 +0000 UTC" firstStartedPulling="2026-01-27 07:00:49.076229057 +0000 UTC m=+870.183196384" lastFinishedPulling="2026-01-27 07:00:56.775296745 +0000 UTC m=+877.882264112" observedRunningTime="2026-01-27 07:01:01.755954308 +0000 UTC m=+882.862921645" watchObservedRunningTime="2026-01-27 07:01:01.763869218 +0000 UTC m=+882.870836585" Jan 27 07:01:03 crc kubenswrapper[4796]: I0127 07:01:03.804140 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-rhh26" Jan 27 07:01:03 crc kubenswrapper[4796]: I0127 07:01:03.842061 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-rhh26" Jan 27 07:01:08 crc kubenswrapper[4796]: I0127 07:01:08.795292 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bkdpj" Jan 27 07:01:09 crc kubenswrapper[4796]: I0127 07:01:09.798876 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-7dwl5" Jan 27 07:01:12 crc kubenswrapper[4796]: I0127 07:01:12.658632 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-vws26"] Jan 27 07:01:12 crc kubenswrapper[4796]: I0127 07:01:12.659830 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vws26" Jan 27 07:01:12 crc kubenswrapper[4796]: I0127 07:01:12.667607 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 27 07:01:12 crc kubenswrapper[4796]: I0127 07:01:12.667602 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 27 07:01:12 crc kubenswrapper[4796]: I0127 07:01:12.667609 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-9vts2" Jan 27 07:01:12 crc kubenswrapper[4796]: I0127 07:01:12.668868 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vws26"] Jan 27 07:01:12 crc kubenswrapper[4796]: I0127 07:01:12.748713 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsw2m\" (UniqueName: \"kubernetes.io/projected/a8fec8b4-98a4-4d7c-a7d3-30d2e714f4fb-kube-api-access-qsw2m\") pod \"openstack-operator-index-vws26\" (UID: \"a8fec8b4-98a4-4d7c-a7d3-30d2e714f4fb\") " pod="openstack-operators/openstack-operator-index-vws26" Jan 27 07:01:12 crc kubenswrapper[4796]: I0127 07:01:12.849837 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsw2m\" (UniqueName: \"kubernetes.io/projected/a8fec8b4-98a4-4d7c-a7d3-30d2e714f4fb-kube-api-access-qsw2m\") pod \"openstack-operator-index-vws26\" (UID: \"a8fec8b4-98a4-4d7c-a7d3-30d2e714f4fb\") " pod="openstack-operators/openstack-operator-index-vws26" Jan 27 07:01:12 crc kubenswrapper[4796]: I0127 07:01:12.868871 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsw2m\" (UniqueName: \"kubernetes.io/projected/a8fec8b4-98a4-4d7c-a7d3-30d2e714f4fb-kube-api-access-qsw2m\") pod \"openstack-operator-index-vws26\" (UID: \"a8fec8b4-98a4-4d7c-a7d3-30d2e714f4fb\") " pod="openstack-operators/openstack-operator-index-vws26" Jan 27 07:01:12 crc kubenswrapper[4796]: I0127 07:01:12.988652 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vws26" Jan 27 07:01:13 crc kubenswrapper[4796]: I0127 07:01:13.414854 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vws26"] Jan 27 07:01:13 crc kubenswrapper[4796]: I0127 07:01:13.819373 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vws26" event={"ID":"a8fec8b4-98a4-4d7c-a7d3-30d2e714f4fb","Type":"ContainerStarted","Data":"19803b831563b48e35ec0824784e182502df062c2d661f85f697a77b2a21580a"} Jan 27 07:01:16 crc kubenswrapper[4796]: I0127 07:01:16.626494 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vws26"] Jan 27 07:01:16 crc kubenswrapper[4796]: I0127 07:01:16.839717 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vws26" event={"ID":"a8fec8b4-98a4-4d7c-a7d3-30d2e714f4fb","Type":"ContainerStarted","Data":"e5f3bd71ce3e046962faaadedefca89c5eab1dbcd8e8304ec702789cf3b914e8"} Jan 27 07:01:16 crc kubenswrapper[4796]: I0127 07:01:16.858984 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-vws26" podStartSLOduration=2.258713525 podStartE2EDuration="4.858967929s" podCreationTimestamp="2026-01-27 07:01:12 +0000 UTC" firstStartedPulling="2026-01-27 07:01:13.427124509 +0000 UTC m=+894.534091846" lastFinishedPulling="2026-01-27 07:01:16.027378923 +0000 UTC m=+897.134346250" observedRunningTime="2026-01-27 07:01:16.853606644 +0000 UTC m=+897.960573991" watchObservedRunningTime="2026-01-27 07:01:16.858967929 +0000 UTC m=+897.965935266" Jan 27 07:01:17 crc kubenswrapper[4796]: I0127 07:01:17.436358 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-2f5gg"] Jan 27 07:01:17 crc kubenswrapper[4796]: I0127 07:01:17.437051 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-2f5gg" Jan 27 07:01:17 crc kubenswrapper[4796]: I0127 07:01:17.444467 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2f5gg"] Jan 27 07:01:17 crc kubenswrapper[4796]: I0127 07:01:17.613247 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt4wt\" (UniqueName: \"kubernetes.io/projected/507ccf39-7d25-425a-a0b0-8381ab9b7562-kube-api-access-mt4wt\") pod \"openstack-operator-index-2f5gg\" (UID: \"507ccf39-7d25-425a-a0b0-8381ab9b7562\") " pod="openstack-operators/openstack-operator-index-2f5gg" Jan 27 07:01:17 crc kubenswrapper[4796]: I0127 07:01:17.714737 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt4wt\" (UniqueName: \"kubernetes.io/projected/507ccf39-7d25-425a-a0b0-8381ab9b7562-kube-api-access-mt4wt\") pod \"openstack-operator-index-2f5gg\" (UID: \"507ccf39-7d25-425a-a0b0-8381ab9b7562\") " pod="openstack-operators/openstack-operator-index-2f5gg" Jan 27 07:01:17 crc kubenswrapper[4796]: I0127 07:01:17.740718 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt4wt\" (UniqueName: \"kubernetes.io/projected/507ccf39-7d25-425a-a0b0-8381ab9b7562-kube-api-access-mt4wt\") pod \"openstack-operator-index-2f5gg\" (UID: \"507ccf39-7d25-425a-a0b0-8381ab9b7562\") " pod="openstack-operators/openstack-operator-index-2f5gg" Jan 27 07:01:17 crc kubenswrapper[4796]: I0127 07:01:17.768906 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-2f5gg" Jan 27 07:01:17 crc kubenswrapper[4796]: I0127 07:01:17.845709 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-vws26" podUID="a8fec8b4-98a4-4d7c-a7d3-30d2e714f4fb" containerName="registry-server" containerID="cri-o://e5f3bd71ce3e046962faaadedefca89c5eab1dbcd8e8304ec702789cf3b914e8" gracePeriod=2 Jan 27 07:01:17 crc kubenswrapper[4796]: I0127 07:01:17.959006 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2f5gg"] Jan 27 07:01:18 crc kubenswrapper[4796]: I0127 07:01:18.177758 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vws26" Jan 27 07:01:18 crc kubenswrapper[4796]: I0127 07:01:18.324741 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsw2m\" (UniqueName: \"kubernetes.io/projected/a8fec8b4-98a4-4d7c-a7d3-30d2e714f4fb-kube-api-access-qsw2m\") pod \"a8fec8b4-98a4-4d7c-a7d3-30d2e714f4fb\" (UID: \"a8fec8b4-98a4-4d7c-a7d3-30d2e714f4fb\") " Jan 27 07:01:18 crc kubenswrapper[4796]: I0127 07:01:18.333620 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8fec8b4-98a4-4d7c-a7d3-30d2e714f4fb-kube-api-access-qsw2m" (OuterVolumeSpecName: "kube-api-access-qsw2m") pod "a8fec8b4-98a4-4d7c-a7d3-30d2e714f4fb" (UID: "a8fec8b4-98a4-4d7c-a7d3-30d2e714f4fb"). InnerVolumeSpecName "kube-api-access-qsw2m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:01:18 crc kubenswrapper[4796]: I0127 07:01:18.426875 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsw2m\" (UniqueName: \"kubernetes.io/projected/a8fec8b4-98a4-4d7c-a7d3-30d2e714f4fb-kube-api-access-qsw2m\") on node \"crc\" DevicePath \"\"" Jan 27 07:01:18 crc kubenswrapper[4796]: I0127 07:01:18.646708 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-48zkg"] Jan 27 07:01:18 crc kubenswrapper[4796]: E0127 07:01:18.647586 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8fec8b4-98a4-4d7c-a7d3-30d2e714f4fb" containerName="registry-server" Jan 27 07:01:18 crc kubenswrapper[4796]: I0127 07:01:18.647630 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8fec8b4-98a4-4d7c-a7d3-30d2e714f4fb" containerName="registry-server" Jan 27 07:01:18 crc kubenswrapper[4796]: I0127 07:01:18.647935 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8fec8b4-98a4-4d7c-a7d3-30d2e714f4fb" containerName="registry-server" Jan 27 07:01:18 crc kubenswrapper[4796]: I0127 07:01:18.650345 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-48zkg" Jan 27 07:01:18 crc kubenswrapper[4796]: I0127 07:01:18.663640 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-48zkg"] Jan 27 07:01:18 crc kubenswrapper[4796]: I0127 07:01:18.809459 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-rhh26" Jan 27 07:01:18 crc kubenswrapper[4796]: I0127 07:01:18.832758 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v9qh\" (UniqueName: \"kubernetes.io/projected/cd081b54-58a8-4bca-bf8e-091f67a9da36-kube-api-access-2v9qh\") pod \"redhat-marketplace-48zkg\" (UID: \"cd081b54-58a8-4bca-bf8e-091f67a9da36\") " pod="openshift-marketplace/redhat-marketplace-48zkg" Jan 27 07:01:18 crc kubenswrapper[4796]: I0127 07:01:18.832821 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd081b54-58a8-4bca-bf8e-091f67a9da36-catalog-content\") pod \"redhat-marketplace-48zkg\" (UID: \"cd081b54-58a8-4bca-bf8e-091f67a9da36\") " pod="openshift-marketplace/redhat-marketplace-48zkg" Jan 27 07:01:18 crc kubenswrapper[4796]: I0127 07:01:18.832851 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd081b54-58a8-4bca-bf8e-091f67a9da36-utilities\") pod \"redhat-marketplace-48zkg\" (UID: \"cd081b54-58a8-4bca-bf8e-091f67a9da36\") " pod="openshift-marketplace/redhat-marketplace-48zkg" Jan 27 07:01:18 crc kubenswrapper[4796]: I0127 07:01:18.854362 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2f5gg" event={"ID":"507ccf39-7d25-425a-a0b0-8381ab9b7562","Type":"ContainerStarted","Data":"b464d34b8d3b76150b22f8f0ed9ea4261efc9d8dd7e0874639ec58023e99871c"} Jan 27 07:01:18 crc kubenswrapper[4796]: I0127 07:01:18.854425 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2f5gg" event={"ID":"507ccf39-7d25-425a-a0b0-8381ab9b7562","Type":"ContainerStarted","Data":"89a85f085bb4e09cb529b859878d87eecc5d14caef22861f1dd2cc92defa6e68"} Jan 27 07:01:18 
crc kubenswrapper[4796]: I0127 07:01:18.855745 4796 generic.go:334] "Generic (PLEG): container finished" podID="a8fec8b4-98a4-4d7c-a7d3-30d2e714f4fb" containerID="e5f3bd71ce3e046962faaadedefca89c5eab1dbcd8e8304ec702789cf3b914e8" exitCode=0 Jan 27 07:01:18 crc kubenswrapper[4796]: I0127 07:01:18.855797 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vws26" event={"ID":"a8fec8b4-98a4-4d7c-a7d3-30d2e714f4fb","Type":"ContainerDied","Data":"e5f3bd71ce3e046962faaadedefca89c5eab1dbcd8e8304ec702789cf3b914e8"} Jan 27 07:01:18 crc kubenswrapper[4796]: I0127 07:01:18.855804 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vws26" Jan 27 07:01:18 crc kubenswrapper[4796]: I0127 07:01:18.855836 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vws26" event={"ID":"a8fec8b4-98a4-4d7c-a7d3-30d2e714f4fb","Type":"ContainerDied","Data":"19803b831563b48e35ec0824784e182502df062c2d661f85f697a77b2a21580a"} Jan 27 07:01:18 crc kubenswrapper[4796]: I0127 07:01:18.855858 4796 scope.go:117] "RemoveContainer" containerID="e5f3bd71ce3e046962faaadedefca89c5eab1dbcd8e8304ec702789cf3b914e8" Jan 27 07:01:18 crc kubenswrapper[4796]: I0127 07:01:18.873559 4796 scope.go:117] "RemoveContainer" containerID="e5f3bd71ce3e046962faaadedefca89c5eab1dbcd8e8304ec702789cf3b914e8" Jan 27 07:01:18 crc kubenswrapper[4796]: E0127 07:01:18.879708 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5f3bd71ce3e046962faaadedefca89c5eab1dbcd8e8304ec702789cf3b914e8\": container with ID starting with e5f3bd71ce3e046962faaadedefca89c5eab1dbcd8e8304ec702789cf3b914e8 not found: ID does not exist" containerID="e5f3bd71ce3e046962faaadedefca89c5eab1dbcd8e8304ec702789cf3b914e8" Jan 27 07:01:18 crc kubenswrapper[4796]: I0127 07:01:18.879760 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5f3bd71ce3e046962faaadedefca89c5eab1dbcd8e8304ec702789cf3b914e8"} err="failed to get container status \"e5f3bd71ce3e046962faaadedefca89c5eab1dbcd8e8304ec702789cf3b914e8\": rpc error: code = NotFound desc = could not find container \"e5f3bd71ce3e046962faaadedefca89c5eab1dbcd8e8304ec702789cf3b914e8\": container with ID starting with e5f3bd71ce3e046962faaadedefca89c5eab1dbcd8e8304ec702789cf3b914e8 not found: ID does not exist" Jan 27 07:01:18 crc kubenswrapper[4796]: I0127 07:01:18.893812 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-2f5gg" podStartSLOduration=1.825592189 podStartE2EDuration="1.893791369s" podCreationTimestamp="2026-01-27 07:01:17 +0000 UTC" firstStartedPulling="2026-01-27 07:01:17.967942874 +0000 UTC m=+899.074910221" lastFinishedPulling="2026-01-27 07:01:18.036142034 +0000 UTC m=+899.143109401" observedRunningTime="2026-01-27 07:01:18.891739338 +0000 UTC m=+899.998706665" watchObservedRunningTime="2026-01-27 07:01:18.893791369 +0000 UTC m=+900.000758696" Jan 27 07:01:18 crc kubenswrapper[4796]: I0127 07:01:18.915602 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vws26"] Jan 27 07:01:18 crc kubenswrapper[4796]: I0127 07:01:18.929410 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-vws26"] Jan 27 07:01:18 crc kubenswrapper[4796]: I0127 07:01:18.937174 
4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v9qh\" (UniqueName: \"kubernetes.io/projected/cd081b54-58a8-4bca-bf8e-091f67a9da36-kube-api-access-2v9qh\") pod \"redhat-marketplace-48zkg\" (UID: \"cd081b54-58a8-4bca-bf8e-091f67a9da36\") " pod="openshift-marketplace/redhat-marketplace-48zkg" Jan 27 07:01:18 crc kubenswrapper[4796]: I0127 07:01:18.937224 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd081b54-58a8-4bca-bf8e-091f67a9da36-catalog-content\") pod \"redhat-marketplace-48zkg\" (UID: \"cd081b54-58a8-4bca-bf8e-091f67a9da36\") " pod="openshift-marketplace/redhat-marketplace-48zkg" Jan 27 07:01:18 crc kubenswrapper[4796]: I0127 07:01:18.937249 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd081b54-58a8-4bca-bf8e-091f67a9da36-utilities\") pod \"redhat-marketplace-48zkg\" (UID: \"cd081b54-58a8-4bca-bf8e-091f67a9da36\") " pod="openshift-marketplace/redhat-marketplace-48zkg" Jan 27 07:01:18 crc kubenswrapper[4796]: I0127 07:01:18.940243 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd081b54-58a8-4bca-bf8e-091f67a9da36-utilities\") pod \"redhat-marketplace-48zkg\" (UID: \"cd081b54-58a8-4bca-bf8e-091f67a9da36\") " pod="openshift-marketplace/redhat-marketplace-48zkg" Jan 27 07:01:18 crc kubenswrapper[4796]: I0127 07:01:18.940320 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd081b54-58a8-4bca-bf8e-091f67a9da36-catalog-content\") pod \"redhat-marketplace-48zkg\" (UID: \"cd081b54-58a8-4bca-bf8e-091f67a9da36\") " pod="openshift-marketplace/redhat-marketplace-48zkg" Jan 27 07:01:18 crc kubenswrapper[4796]: I0127 07:01:18.959284 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v9qh\" (UniqueName: \"kubernetes.io/projected/cd081b54-58a8-4bca-bf8e-091f67a9da36-kube-api-access-2v9qh\") pod \"redhat-marketplace-48zkg\" (UID: \"cd081b54-58a8-4bca-bf8e-091f67a9da36\") " pod="openshift-marketplace/redhat-marketplace-48zkg" Jan 27 07:01:18 crc kubenswrapper[4796]: I0127 07:01:18.976349 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-48zkg" Jan 27 07:01:19 crc kubenswrapper[4796]: I0127 07:01:19.181977 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-48zkg"] Jan 27 07:01:19 crc kubenswrapper[4796]: I0127 07:01:19.869502 4796 generic.go:334] "Generic (PLEG): container finished" podID="cd081b54-58a8-4bca-bf8e-091f67a9da36" containerID="2583a6ded9f43538e2449347a76fb005fe5f5c6238488da15529a7f30652cac5" exitCode=0 Jan 27 07:01:19 crc kubenswrapper[4796]: I0127 07:01:19.869585 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-48zkg" event={"ID":"cd081b54-58a8-4bca-bf8e-091f67a9da36","Type":"ContainerDied","Data":"2583a6ded9f43538e2449347a76fb005fe5f5c6238488da15529a7f30652cac5"} Jan 27 07:01:19 crc kubenswrapper[4796]: I0127 07:01:19.870065 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-48zkg" event={"ID":"cd081b54-58a8-4bca-bf8e-091f67a9da36","Type":"ContainerStarted","Data":"e6afde595bf8f760c0426c559cfba4d5a2aa28a4516bd2f62ef64816e9a064dc"} Jan 27 07:01:20 crc kubenswrapper[4796]: I0127 07:01:20.760040 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8fec8b4-98a4-4d7c-a7d3-30d2e714f4fb" path="/var/lib/kubelet/pods/a8fec8b4-98a4-4d7c-a7d3-30d2e714f4fb/volumes" Jan 27 07:01:20 crc kubenswrapper[4796]: I0127 07:01:20.883451 4796 generic.go:334] "Generic (PLEG): container finished" podID="cd081b54-58a8-4bca-bf8e-091f67a9da36" containerID="2b800c0057407d438b7b8a02ca2b63b1a75877b5652fbc8c230790c2672f92f8" exitCode=0 Jan 27 07:01:20 crc kubenswrapper[4796]: I0127 07:01:20.883579 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-48zkg" event={"ID":"cd081b54-58a8-4bca-bf8e-091f67a9da36","Type":"ContainerDied","Data":"2b800c0057407d438b7b8a02ca2b63b1a75877b5652fbc8c230790c2672f92f8"} Jan 27 07:01:21 crc kubenswrapper[4796]: I0127 07:01:21.895280 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-48zkg" event={"ID":"cd081b54-58a8-4bca-bf8e-091f67a9da36","Type":"ContainerStarted","Data":"0478efb9957397e6cd71ed5a1350bececbd4416d3ff5a1e217568a5b6ebb6504"} Jan 27 07:01:21 crc kubenswrapper[4796]: I0127 07:01:21.922961 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-48zkg" podStartSLOduration=2.4755663820000002 podStartE2EDuration="3.922947664s" podCreationTimestamp="2026-01-27 07:01:18 +0000 UTC" firstStartedPulling="2026-01-27 07:01:19.872587151 +0000 UTC m=+900.979554478" lastFinishedPulling="2026-01-27 07:01:21.319968393 +0000 UTC m=+902.426935760" observedRunningTime="2026-01-27 07:01:21.920619215 +0000 UTC m=+903.027586582" watchObservedRunningTime="2026-01-27 07:01:21.922947664 +0000 UTC m=+903.029914991" Jan 27 07:01:27 crc kubenswrapper[4796]: I0127 07:01:27.769334 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-2f5gg" Jan 27 07:01:27 crc kubenswrapper[4796]: I0127 07:01:27.770083 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-2f5gg" Jan 27 07:01:27 crc kubenswrapper[4796]: I0127 07:01:27.808951 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-2f5gg" Jan 27 07:01:27 crc kubenswrapper[4796]: 
I0127 07:01:27.965852 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-2f5gg" Jan 27 07:01:28 crc kubenswrapper[4796]: I0127 07:01:28.881047 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e96sbnz5"] Jan 27 07:01:28 crc kubenswrapper[4796]: I0127 07:01:28.883688 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e96sbnz5" Jan 27 07:01:28 crc kubenswrapper[4796]: I0127 07:01:28.886283 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-xsbr9" Jan 27 07:01:28 crc kubenswrapper[4796]: I0127 07:01:28.894187 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e96sbnz5"] Jan 27 07:01:28 crc kubenswrapper[4796]: I0127 07:01:28.895276 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb2wd\" (UniqueName: \"kubernetes.io/projected/e6c83c32-76cc-4afb-a71f-02a66a616302-kube-api-access-qb2wd\") pod \"5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e96sbnz5\" (UID: \"e6c83c32-76cc-4afb-a71f-02a66a616302\") " pod="openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e96sbnz5" Jan 27 07:01:28 crc kubenswrapper[4796]: I0127 07:01:28.895355 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e6c83c32-76cc-4afb-a71f-02a66a616302-bundle\") pod \"5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e96sbnz5\" (UID: \"e6c83c32-76cc-4afb-a71f-02a66a616302\") " pod="openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e96sbnz5" Jan 27 07:01:28 crc kubenswrapper[4796]: I0127 07:01:28.895629 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e6c83c32-76cc-4afb-a71f-02a66a616302-util\") pod \"5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e96sbnz5\" (UID: \"e6c83c32-76cc-4afb-a71f-02a66a616302\") " pod="openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e96sbnz5" Jan 27 07:01:28 crc kubenswrapper[4796]: I0127 07:01:28.976924 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-48zkg" Jan 27 07:01:28 crc kubenswrapper[4796]: I0127 07:01:28.977308 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-48zkg" Jan 27 07:01:28 crc kubenswrapper[4796]: I0127 07:01:28.996844 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb2wd\" (UniqueName: \"kubernetes.io/projected/e6c83c32-76cc-4afb-a71f-02a66a616302-kube-api-access-qb2wd\") pod \"5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e96sbnz5\" (UID: \"e6c83c32-76cc-4afb-a71f-02a66a616302\") " pod="openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e96sbnz5" Jan 27 07:01:28 crc kubenswrapper[4796]: I0127 07:01:28.996948 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e6c83c32-76cc-4afb-a71f-02a66a616302-bundle\") pod 
\"5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e96sbnz5\" (UID: \"e6c83c32-76cc-4afb-a71f-02a66a616302\") " pod="openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e96sbnz5" Jan 27 07:01:28 crc kubenswrapper[4796]: I0127 07:01:28.997072 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e6c83c32-76cc-4afb-a71f-02a66a616302-util\") pod \"5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e96sbnz5\" (UID: \"e6c83c32-76cc-4afb-a71f-02a66a616302\") " pod="openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e96sbnz5" Jan 27 07:01:28 crc kubenswrapper[4796]: I0127 07:01:28.997823 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e6c83c32-76cc-4afb-a71f-02a66a616302-bundle\") pod \"5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e96sbnz5\" (UID: \"e6c83c32-76cc-4afb-a71f-02a66a616302\") " pod="openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e96sbnz5" Jan 27 07:01:28 crc kubenswrapper[4796]: I0127 07:01:28.997858 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e6c83c32-76cc-4afb-a71f-02a66a616302-util\") pod \"5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e96sbnz5\" (UID: \"e6c83c32-76cc-4afb-a71f-02a66a616302\") " pod="openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e96sbnz5" Jan 27 07:01:29 crc kubenswrapper[4796]: I0127 07:01:29.020932 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb2wd\" (UniqueName: \"kubernetes.io/projected/e6c83c32-76cc-4afb-a71f-02a66a616302-kube-api-access-qb2wd\") pod \"5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e96sbnz5\" (UID: \"e6c83c32-76cc-4afb-a71f-02a66a616302\") " pod="openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e96sbnz5" Jan 27 07:01:29 crc kubenswrapper[4796]: I0127 07:01:29.034034 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-48zkg" Jan 27 07:01:29 crc kubenswrapper[4796]: I0127 07:01:29.210617 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e96sbnz5" Jan 27 07:01:29 crc kubenswrapper[4796]: I0127 07:01:29.726772 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e96sbnz5"] Jan 27 07:01:29 crc kubenswrapper[4796]: I0127 07:01:29.963897 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e96sbnz5" event={"ID":"e6c83c32-76cc-4afb-a71f-02a66a616302","Type":"ContainerStarted","Data":"2b434d5e4db7699813afabdd4e5a5ee67cf5fd42d99e31aef5bd31c1db35f8be"} Jan 27 07:01:29 crc kubenswrapper[4796]: I0127 07:01:29.963947 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e96sbnz5" event={"ID":"e6c83c32-76cc-4afb-a71f-02a66a616302","Type":"ContainerStarted","Data":"addcd6d98b0a563e4a89fc112c24772b708584e01c40232569e9b4b4a4896e09"} Jan 27 07:01:30 crc kubenswrapper[4796]: I0127 07:01:30.035644 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-48zkg" Jan 27 07:01:30 crc kubenswrapper[4796]: I0127 07:01:30.834006 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-48zkg"] Jan 27 07:01:30 crc kubenswrapper[4796]: I0127 07:01:30.971618 4796 generic.go:334] "Generic (PLEG): container finished" podID="e6c83c32-76cc-4afb-a71f-02a66a616302" containerID="2b434d5e4db7699813afabdd4e5a5ee67cf5fd42d99e31aef5bd31c1db35f8be" exitCode=0 Jan 27 07:01:30 crc kubenswrapper[4796]: I0127 07:01:30.971739 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e96sbnz5" event={"ID":"e6c83c32-76cc-4afb-a71f-02a66a616302","Type":"ContainerDied","Data":"2b434d5e4db7699813afabdd4e5a5ee67cf5fd42d99e31aef5bd31c1db35f8be"} Jan 27 07:01:31 crc kubenswrapper[4796]: I0127 07:01:31.980132 4796 generic.go:334] "Generic (PLEG): container finished" podID="e6c83c32-76cc-4afb-a71f-02a66a616302" containerID="7a4355a8d70b6550aa0f01003b834ea4f0360a9b32cb4c2a53dae67842bb1f2e" exitCode=0 Jan 27 07:01:31 crc kubenswrapper[4796]: I0127 07:01:31.980191 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e96sbnz5" event={"ID":"e6c83c32-76cc-4afb-a71f-02a66a616302","Type":"ContainerDied","Data":"7a4355a8d70b6550aa0f01003b834ea4f0360a9b32cb4c2a53dae67842bb1f2e"} Jan 27 07:01:31 crc kubenswrapper[4796]: I0127 07:01:31.980942 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-48zkg" podUID="cd081b54-58a8-4bca-bf8e-091f67a9da36" containerName="registry-server" containerID="cri-o://0478efb9957397e6cd71ed5a1350bececbd4416d3ff5a1e217568a5b6ebb6504" gracePeriod=2 Jan 27 07:01:32 crc kubenswrapper[4796]: I0127 07:01:32.442568 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-48zkg" Jan 27 07:01:32 crc kubenswrapper[4796]: I0127 07:01:32.446642 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd081b54-58a8-4bca-bf8e-091f67a9da36-catalog-content\") pod \"cd081b54-58a8-4bca-bf8e-091f67a9da36\" (UID: \"cd081b54-58a8-4bca-bf8e-091f67a9da36\") " Jan 27 07:01:32 crc kubenswrapper[4796]: I0127 07:01:32.446723 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2v9qh\" (UniqueName: \"kubernetes.io/projected/cd081b54-58a8-4bca-bf8e-091f67a9da36-kube-api-access-2v9qh\") pod \"cd081b54-58a8-4bca-bf8e-091f67a9da36\" (UID: \"cd081b54-58a8-4bca-bf8e-091f67a9da36\") " Jan 27 07:01:32 crc kubenswrapper[4796]: I0127 07:01:32.446852 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd081b54-58a8-4bca-bf8e-091f67a9da36-utilities\") pod \"cd081b54-58a8-4bca-bf8e-091f67a9da36\" (UID: \"cd081b54-58a8-4bca-bf8e-091f67a9da36\") " Jan 27 07:01:32 crc kubenswrapper[4796]: I0127 07:01:32.447630 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd081b54-58a8-4bca-bf8e-091f67a9da36-utilities" (OuterVolumeSpecName: "utilities") pod "cd081b54-58a8-4bca-bf8e-091f67a9da36" (UID: "cd081b54-58a8-4bca-bf8e-091f67a9da36"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:01:32 crc kubenswrapper[4796]: I0127 07:01:32.452867 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd081b54-58a8-4bca-bf8e-091f67a9da36-kube-api-access-2v9qh" (OuterVolumeSpecName: "kube-api-access-2v9qh") pod "cd081b54-58a8-4bca-bf8e-091f67a9da36" (UID: "cd081b54-58a8-4bca-bf8e-091f67a9da36"). InnerVolumeSpecName "kube-api-access-2v9qh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:01:32 crc kubenswrapper[4796]: I0127 07:01:32.474424 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd081b54-58a8-4bca-bf8e-091f67a9da36-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd081b54-58a8-4bca-bf8e-091f67a9da36" (UID: "cd081b54-58a8-4bca-bf8e-091f67a9da36"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:01:32 crc kubenswrapper[4796]: I0127 07:01:32.548731 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd081b54-58a8-4bca-bf8e-091f67a9da36-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:01:32 crc kubenswrapper[4796]: I0127 07:01:32.548767 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd081b54-58a8-4bca-bf8e-091f67a9da36-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:01:32 crc kubenswrapper[4796]: I0127 07:01:32.548781 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2v9qh\" (UniqueName: \"kubernetes.io/projected/cd081b54-58a8-4bca-bf8e-091f67a9da36-kube-api-access-2v9qh\") on node \"crc\" DevicePath \"\"" Jan 27 07:01:32 crc kubenswrapper[4796]: I0127 07:01:32.838207 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7gnq6"] Jan 27 07:01:32 crc kubenswrapper[4796]: E0127 07:01:32.838917 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd081b54-58a8-4bca-bf8e-091f67a9da36" containerName="extract-content" Jan 27 07:01:32 crc kubenswrapper[4796]: I0127 07:01:32.838944 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd081b54-58a8-4bca-bf8e-091f67a9da36" containerName="extract-content" Jan 27 07:01:32 crc kubenswrapper[4796]: E0127 07:01:32.838960 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd081b54-58a8-4bca-bf8e-091f67a9da36" containerName="extract-utilities" Jan 27 07:01:32 crc kubenswrapper[4796]: I0127 07:01:32.838968 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd081b54-58a8-4bca-bf8e-091f67a9da36" containerName="extract-utilities" Jan 27 07:01:32 crc kubenswrapper[4796]: E0127 07:01:32.838994 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd081b54-58a8-4bca-bf8e-091f67a9da36" containerName="registry-server" Jan 27 07:01:32 crc kubenswrapper[4796]: I0127 07:01:32.839002 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd081b54-58a8-4bca-bf8e-091f67a9da36" containerName="registry-server" Jan 27 07:01:32 crc kubenswrapper[4796]: I0127 07:01:32.839150 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd081b54-58a8-4bca-bf8e-091f67a9da36" containerName="registry-server" Jan 27 07:01:32 crc kubenswrapper[4796]: I0127 07:01:32.841478 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7gnq6" Jan 27 07:01:32 crc kubenswrapper[4796]: I0127 07:01:32.848375 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7gnq6"] Jan 27 07:01:32 crc kubenswrapper[4796]: I0127 07:01:32.850735 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vbfg\" (UniqueName: \"kubernetes.io/projected/2edf3743-0387-4779-b77c-38f486e3eb2d-kube-api-access-5vbfg\") pod \"certified-operators-7gnq6\" (UID: \"2edf3743-0387-4779-b77c-38f486e3eb2d\") " pod="openshift-marketplace/certified-operators-7gnq6" Jan 27 07:01:32 crc kubenswrapper[4796]: I0127 07:01:32.850788 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2edf3743-0387-4779-b77c-38f486e3eb2d-catalog-content\") pod \"certified-operators-7gnq6\" (UID: \"2edf3743-0387-4779-b77c-38f486e3eb2d\") " pod="openshift-marketplace/certified-operators-7gnq6" Jan 27 07:01:32 crc kubenswrapper[4796]: I0127 07:01:32.850812 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2edf3743-0387-4779-b77c-38f486e3eb2d-utilities\") pod \"certified-operators-7gnq6\" (UID: \"2edf3743-0387-4779-b77c-38f486e3eb2d\") " pod="openshift-marketplace/certified-operators-7gnq6" Jan 27 07:01:32 crc kubenswrapper[4796]: I0127 07:01:32.951476 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2edf3743-0387-4779-b77c-38f486e3eb2d-utilities\") pod \"certified-operators-7gnq6\" (UID: \"2edf3743-0387-4779-b77c-38f486e3eb2d\") " pod="openshift-marketplace/certified-operators-7gnq6" Jan 27 07:01:32 crc kubenswrapper[4796]: I0127 07:01:32.951596 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vbfg\" (UniqueName: \"kubernetes.io/projected/2edf3743-0387-4779-b77c-38f486e3eb2d-kube-api-access-5vbfg\") pod \"certified-operators-7gnq6\" (UID: \"2edf3743-0387-4779-b77c-38f486e3eb2d\") " pod="openshift-marketplace/certified-operators-7gnq6" Jan 27 07:01:32 crc kubenswrapper[4796]: I0127 07:01:32.951626 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2edf3743-0387-4779-b77c-38f486e3eb2d-catalog-content\") pod \"certified-operators-7gnq6\" (UID: \"2edf3743-0387-4779-b77c-38f486e3eb2d\") " pod="openshift-marketplace/certified-operators-7gnq6" Jan 27 07:01:32 crc kubenswrapper[4796]: I0127 07:01:32.952315 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2edf3743-0387-4779-b77c-38f486e3eb2d-catalog-content\") pod \"certified-operators-7gnq6\" (UID: \"2edf3743-0387-4779-b77c-38f486e3eb2d\") " pod="openshift-marketplace/certified-operators-7gnq6" Jan 27 07:01:32 crc kubenswrapper[4796]: I0127 07:01:32.953664 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2edf3743-0387-4779-b77c-38f486e3eb2d-utilities\") pod \"certified-operators-7gnq6\" (UID: \"2edf3743-0387-4779-b77c-38f486e3eb2d\") " pod="openshift-marketplace/certified-operators-7gnq6" Jan 27 07:01:32 crc kubenswrapper[4796]: I0127 07:01:32.975102 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5vbfg\" (UniqueName: \"kubernetes.io/projected/2edf3743-0387-4779-b77c-38f486e3eb2d-kube-api-access-5vbfg\") pod \"certified-operators-7gnq6\" (UID: \"2edf3743-0387-4779-b77c-38f486e3eb2d\") " pod="openshift-marketplace/certified-operators-7gnq6" Jan 27 07:01:32 crc kubenswrapper[4796]: I0127 07:01:32.993638 4796 generic.go:334] "Generic (PLEG): container finished" podID="e6c83c32-76cc-4afb-a71f-02a66a616302" containerID="094f251d87f572ffc8b27dcf5da35d955f817fdba6afe4552acd9eff6777169b" exitCode=0 Jan 27 07:01:32 crc kubenswrapper[4796]: I0127 07:01:32.993714 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e96sbnz5" event={"ID":"e6c83c32-76cc-4afb-a71f-02a66a616302","Type":"ContainerDied","Data":"094f251d87f572ffc8b27dcf5da35d955f817fdba6afe4552acd9eff6777169b"} Jan 27 07:01:32 crc kubenswrapper[4796]: I0127 07:01:32.996220 4796 generic.go:334] "Generic (PLEG): container finished" podID="cd081b54-58a8-4bca-bf8e-091f67a9da36" containerID="0478efb9957397e6cd71ed5a1350bececbd4416d3ff5a1e217568a5b6ebb6504" exitCode=0 Jan 27 07:01:32 crc kubenswrapper[4796]: I0127 07:01:32.996269 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-48zkg" Jan 27 07:01:32 crc kubenswrapper[4796]: I0127 07:01:32.996287 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-48zkg" event={"ID":"cd081b54-58a8-4bca-bf8e-091f67a9da36","Type":"ContainerDied","Data":"0478efb9957397e6cd71ed5a1350bececbd4416d3ff5a1e217568a5b6ebb6504"} Jan 27 07:01:32 crc kubenswrapper[4796]: I0127 07:01:32.996313 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-48zkg" event={"ID":"cd081b54-58a8-4bca-bf8e-091f67a9da36","Type":"ContainerDied","Data":"e6afde595bf8f760c0426c559cfba4d5a2aa28a4516bd2f62ef64816e9a064dc"} Jan 27 07:01:32 crc kubenswrapper[4796]: I0127 07:01:32.996337 4796 scope.go:117] "RemoveContainer" containerID="0478efb9957397e6cd71ed5a1350bececbd4416d3ff5a1e217568a5b6ebb6504" Jan 27 07:01:33 crc kubenswrapper[4796]: I0127 07:01:33.022577 4796 scope.go:117] "RemoveContainer" containerID="2b800c0057407d438b7b8a02ca2b63b1a75877b5652fbc8c230790c2672f92f8" Jan 27 07:01:33 crc kubenswrapper[4796]: I0127 07:01:33.033432 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-48zkg"] Jan 27 07:01:33 crc kubenswrapper[4796]: I0127 07:01:33.038561 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-48zkg"] Jan 27 07:01:33 crc kubenswrapper[4796]: I0127 07:01:33.050730 4796 scope.go:117] "RemoveContainer" containerID="2583a6ded9f43538e2449347a76fb005fe5f5c6238488da15529a7f30652cac5" Jan 27 07:01:33 crc kubenswrapper[4796]: I0127 07:01:33.072106 4796 scope.go:117] "RemoveContainer" containerID="0478efb9957397e6cd71ed5a1350bececbd4416d3ff5a1e217568a5b6ebb6504" Jan 27 07:01:33 crc kubenswrapper[4796]: E0127 07:01:33.074406 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0478efb9957397e6cd71ed5a1350bececbd4416d3ff5a1e217568a5b6ebb6504\": container with ID starting with 0478efb9957397e6cd71ed5a1350bececbd4416d3ff5a1e217568a5b6ebb6504 not found: ID does not exist" containerID="0478efb9957397e6cd71ed5a1350bececbd4416d3ff5a1e217568a5b6ebb6504" Jan 27 07:01:33 
crc kubenswrapper[4796]: I0127 07:01:33.074462 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0478efb9957397e6cd71ed5a1350bececbd4416d3ff5a1e217568a5b6ebb6504"} err="failed to get container status \"0478efb9957397e6cd71ed5a1350bececbd4416d3ff5a1e217568a5b6ebb6504\": rpc error: code = NotFound desc = could not find container \"0478efb9957397e6cd71ed5a1350bececbd4416d3ff5a1e217568a5b6ebb6504\": container with ID starting with 0478efb9957397e6cd71ed5a1350bececbd4416d3ff5a1e217568a5b6ebb6504 not found: ID does not exist" Jan 27 07:01:33 crc kubenswrapper[4796]: I0127 07:01:33.074486 4796 scope.go:117] "RemoveContainer" containerID="2b800c0057407d438b7b8a02ca2b63b1a75877b5652fbc8c230790c2672f92f8" Jan 27 07:01:33 crc kubenswrapper[4796]: E0127 07:01:33.074919 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b800c0057407d438b7b8a02ca2b63b1a75877b5652fbc8c230790c2672f92f8\": container with ID starting with 2b800c0057407d438b7b8a02ca2b63b1a75877b5652fbc8c230790c2672f92f8 not found: ID does not exist" containerID="2b800c0057407d438b7b8a02ca2b63b1a75877b5652fbc8c230790c2672f92f8" Jan 27 07:01:33 crc kubenswrapper[4796]: I0127 07:01:33.074966 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b800c0057407d438b7b8a02ca2b63b1a75877b5652fbc8c230790c2672f92f8"} err="failed to get container status \"2b800c0057407d438b7b8a02ca2b63b1a75877b5652fbc8c230790c2672f92f8\": rpc error: code = NotFound desc = could not find container \"2b800c0057407d438b7b8a02ca2b63b1a75877b5652fbc8c230790c2672f92f8\": container with ID starting with 2b800c0057407d438b7b8a02ca2b63b1a75877b5652fbc8c230790c2672f92f8 not found: ID does not exist" Jan 27 07:01:33 crc kubenswrapper[4796]: I0127 07:01:33.074996 4796 scope.go:117] "RemoveContainer" containerID="2583a6ded9f43538e2449347a76fb005fe5f5c6238488da15529a7f30652cac5" Jan 27 07:01:33 crc kubenswrapper[4796]: E0127 07:01:33.075376 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2583a6ded9f43538e2449347a76fb005fe5f5c6238488da15529a7f30652cac5\": container with ID starting with 2583a6ded9f43538e2449347a76fb005fe5f5c6238488da15529a7f30652cac5 not found: ID does not exist" containerID="2583a6ded9f43538e2449347a76fb005fe5f5c6238488da15529a7f30652cac5" Jan 27 07:01:33 crc kubenswrapper[4796]: I0127 07:01:33.075424 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2583a6ded9f43538e2449347a76fb005fe5f5c6238488da15529a7f30652cac5"} err="failed to get container status \"2583a6ded9f43538e2449347a76fb005fe5f5c6238488da15529a7f30652cac5\": rpc error: code = NotFound desc = could not find container \"2583a6ded9f43538e2449347a76fb005fe5f5c6238488da15529a7f30652cac5\": container with ID starting with 2583a6ded9f43538e2449347a76fb005fe5f5c6238488da15529a7f30652cac5 not found: ID does not exist" Jan 27 07:01:33 crc kubenswrapper[4796]: I0127 07:01:33.166786 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7gnq6" Jan 27 07:01:33 crc kubenswrapper[4796]: I0127 07:01:33.638209 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7gnq6"] Jan 27 07:01:33 crc kubenswrapper[4796]: W0127 07:01:33.655962 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2edf3743_0387_4779_b77c_38f486e3eb2d.slice/crio-e3421cc915a525350ff74d8e0f76f5177f5533e7a9cd1ce412d811b7a61487f9 WatchSource:0}: Error finding container e3421cc915a525350ff74d8e0f76f5177f5533e7a9cd1ce412d811b7a61487f9: Status 404 returned error can't find the container with id e3421cc915a525350ff74d8e0f76f5177f5533e7a9cd1ce412d811b7a61487f9 Jan 27 07:01:34 crc kubenswrapper[4796]: I0127 07:01:34.006162 4796 generic.go:334] "Generic (PLEG): container finished" podID="2edf3743-0387-4779-b77c-38f486e3eb2d" containerID="c37f83b9c49841f85028a6da18f78c28b31eaa9bd6e21741f743d277457558d0" exitCode=0 Jan 27 07:01:34 crc kubenswrapper[4796]: I0127 07:01:34.006355 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gnq6" event={"ID":"2edf3743-0387-4779-b77c-38f486e3eb2d","Type":"ContainerDied","Data":"c37f83b9c49841f85028a6da18f78c28b31eaa9bd6e21741f743d277457558d0"} Jan 27 07:01:34 crc kubenswrapper[4796]: I0127 07:01:34.006398 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gnq6" event={"ID":"2edf3743-0387-4779-b77c-38f486e3eb2d","Type":"ContainerStarted","Data":"e3421cc915a525350ff74d8e0f76f5177f5533e7a9cd1ce412d811b7a61487f9"} Jan 27 07:01:34 crc kubenswrapper[4796]: I0127 07:01:34.302645 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e96sbnz5" Jan 27 07:01:34 crc kubenswrapper[4796]: I0127 07:01:34.475748 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e6c83c32-76cc-4afb-a71f-02a66a616302-util\") pod \"e6c83c32-76cc-4afb-a71f-02a66a616302\" (UID: \"e6c83c32-76cc-4afb-a71f-02a66a616302\") " Jan 27 07:01:34 crc kubenswrapper[4796]: I0127 07:01:34.475909 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e6c83c32-76cc-4afb-a71f-02a66a616302-bundle\") pod \"e6c83c32-76cc-4afb-a71f-02a66a616302\" (UID: \"e6c83c32-76cc-4afb-a71f-02a66a616302\") " Jan 27 07:01:34 crc kubenswrapper[4796]: I0127 07:01:34.475993 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qb2wd\" (UniqueName: \"kubernetes.io/projected/e6c83c32-76cc-4afb-a71f-02a66a616302-kube-api-access-qb2wd\") pod \"e6c83c32-76cc-4afb-a71f-02a66a616302\" (UID: \"e6c83c32-76cc-4afb-a71f-02a66a616302\") " Jan 27 07:01:34 crc kubenswrapper[4796]: I0127 07:01:34.476698 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6c83c32-76cc-4afb-a71f-02a66a616302-bundle" (OuterVolumeSpecName: "bundle") pod "e6c83c32-76cc-4afb-a71f-02a66a616302" (UID: "e6c83c32-76cc-4afb-a71f-02a66a616302"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:01:34 crc kubenswrapper[4796]: I0127 07:01:34.480885 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6c83c32-76cc-4afb-a71f-02a66a616302-kube-api-access-qb2wd" (OuterVolumeSpecName: "kube-api-access-qb2wd") pod "e6c83c32-76cc-4afb-a71f-02a66a616302" (UID: "e6c83c32-76cc-4afb-a71f-02a66a616302"). InnerVolumeSpecName "kube-api-access-qb2wd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:01:34 crc kubenswrapper[4796]: I0127 07:01:34.507283 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6c83c32-76cc-4afb-a71f-02a66a616302-util" (OuterVolumeSpecName: "util") pod "e6c83c32-76cc-4afb-a71f-02a66a616302" (UID: "e6c83c32-76cc-4afb-a71f-02a66a616302"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:01:34 crc kubenswrapper[4796]: I0127 07:01:34.577817 4796 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e6c83c32-76cc-4afb-a71f-02a66a616302-bundle\") on node \"crc\" DevicePath \"\"" Jan 27 07:01:34 crc kubenswrapper[4796]: I0127 07:01:34.577869 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qb2wd\" (UniqueName: \"kubernetes.io/projected/e6c83c32-76cc-4afb-a71f-02a66a616302-kube-api-access-qb2wd\") on node \"crc\" DevicePath \"\"" Jan 27 07:01:34 crc kubenswrapper[4796]: I0127 07:01:34.577886 4796 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e6c83c32-76cc-4afb-a71f-02a66a616302-util\") on node \"crc\" DevicePath \"\"" Jan 27 07:01:34 crc kubenswrapper[4796]: I0127 07:01:34.759505 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd081b54-58a8-4bca-bf8e-091f67a9da36" path="/var/lib/kubelet/pods/cd081b54-58a8-4bca-bf8e-091f67a9da36/volumes" Jan 27 07:01:35 crc kubenswrapper[4796]: I0127 07:01:35.019799 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e96sbnz5" event={"ID":"e6c83c32-76cc-4afb-a71f-02a66a616302","Type":"ContainerDied","Data":"addcd6d98b0a563e4a89fc112c24772b708584e01c40232569e9b4b4a4896e09"} Jan 27 07:01:35 crc kubenswrapper[4796]: I0127 07:01:35.019842 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="addcd6d98b0a563e4a89fc112c24772b708584e01c40232569e9b4b4a4896e09" Jan 27 07:01:35 crc kubenswrapper[4796]: I0127 07:01:35.019855 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e96sbnz5" Jan 27 07:01:38 crc kubenswrapper[4796]: I0127 07:01:38.038607 4796 generic.go:334] "Generic (PLEG): container finished" podID="2edf3743-0387-4779-b77c-38f486e3eb2d" containerID="f58fcd1905404297f1363ecf5d99d510542c522fe00ded5021cdc8ab8ec99286" exitCode=0 Jan 27 07:01:38 crc kubenswrapper[4796]: I0127 07:01:38.038672 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gnq6" event={"ID":"2edf3743-0387-4779-b77c-38f486e3eb2d","Type":"ContainerDied","Data":"f58fcd1905404297f1363ecf5d99d510542c522fe00ded5021cdc8ab8ec99286"} Jan 27 07:01:39 crc kubenswrapper[4796]: I0127 07:01:39.049894 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gnq6" event={"ID":"2edf3743-0387-4779-b77c-38f486e3eb2d","Type":"ContainerStarted","Data":"62b4267c34ee7349eca62728547bb750927205b209dac80fd8abc69c29c56f22"} Jan 27 07:01:39 crc kubenswrapper[4796]: I0127 07:01:39.074554 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7gnq6" podStartSLOduration=2.5310089959999997 podStartE2EDuration="7.07451936s" podCreationTimestamp="2026-01-27 07:01:32 +0000 UTC" firstStartedPulling="2026-01-27 07:01:34.007906965 +0000 UTC m=+915.114874292" lastFinishedPulling="2026-01-27 07:01:38.551417319 +0000 UTC m=+919.658384656" observedRunningTime="2026-01-27 07:01:39.070313363 +0000 UTC m=+920.177280730" watchObservedRunningTime="2026-01-27 07:01:39.07451936 +0000 UTC m=+920.181486687" Jan 27 07:01:39 crc kubenswrapper[4796]: I0127 07:01:39.968699 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5c58fc478-n7ck7"] Jan 27 07:01:39 crc kubenswrapper[4796]: E0127 07:01:39.968935 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6c83c32-76cc-4afb-a71f-02a66a616302" containerName="util" Jan 27 07:01:39 crc kubenswrapper[4796]: I0127 07:01:39.968947 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6c83c32-76cc-4afb-a71f-02a66a616302" containerName="util" Jan 27 07:01:39 crc kubenswrapper[4796]: E0127 07:01:39.968961 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6c83c32-76cc-4afb-a71f-02a66a616302" containerName="pull" Jan 27 07:01:39 crc kubenswrapper[4796]: I0127 07:01:39.968967 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6c83c32-76cc-4afb-a71f-02a66a616302" containerName="pull" Jan 27 07:01:39 crc kubenswrapper[4796]: E0127 07:01:39.968976 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6c83c32-76cc-4afb-a71f-02a66a616302" containerName="extract" Jan 27 07:01:39 crc kubenswrapper[4796]: I0127 07:01:39.968981 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6c83c32-76cc-4afb-a71f-02a66a616302" containerName="extract" Jan 27 07:01:39 crc kubenswrapper[4796]: I0127 07:01:39.969081 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6c83c32-76cc-4afb-a71f-02a66a616302" containerName="extract" Jan 27 07:01:39 crc kubenswrapper[4796]: I0127 07:01:39.969452 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5c58fc478-n7ck7" Jan 27 07:01:39 crc kubenswrapper[4796]: I0127 07:01:39.974439 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-fhr8x" Jan 27 07:01:40 crc kubenswrapper[4796]: I0127 07:01:40.007769 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5c58fc478-n7ck7"] Jan 27 07:01:40 crc kubenswrapper[4796]: I0127 07:01:40.155164 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv8b2\" (UniqueName: \"kubernetes.io/projected/b92c2b36-8c14-43e5-aa07-43aadfe3cda4-kube-api-access-gv8b2\") pod \"openstack-operator-controller-init-5c58fc478-n7ck7\" (UID: \"b92c2b36-8c14-43e5-aa07-43aadfe3cda4\") " pod="openstack-operators/openstack-operator-controller-init-5c58fc478-n7ck7" Jan 27 07:01:40 crc kubenswrapper[4796]: I0127 07:01:40.257176 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv8b2\" (UniqueName: \"kubernetes.io/projected/b92c2b36-8c14-43e5-aa07-43aadfe3cda4-kube-api-access-gv8b2\") pod \"openstack-operator-controller-init-5c58fc478-n7ck7\" (UID: \"b92c2b36-8c14-43e5-aa07-43aadfe3cda4\") " pod="openstack-operators/openstack-operator-controller-init-5c58fc478-n7ck7" Jan 27 07:01:40 crc kubenswrapper[4796]: I0127 07:01:40.293716 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv8b2\" (UniqueName: \"kubernetes.io/projected/b92c2b36-8c14-43e5-aa07-43aadfe3cda4-kube-api-access-gv8b2\") pod \"openstack-operator-controller-init-5c58fc478-n7ck7\" (UID: \"b92c2b36-8c14-43e5-aa07-43aadfe3cda4\") " pod="openstack-operators/openstack-operator-controller-init-5c58fc478-n7ck7" Jan 27 07:01:40 crc kubenswrapper[4796]: I0127 07:01:40.586462 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5c58fc478-n7ck7" Jan 27 07:01:41 crc kubenswrapper[4796]: I0127 07:01:41.066199 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5c58fc478-n7ck7"] Jan 27 07:01:41 crc kubenswrapper[4796]: W0127 07:01:41.070417 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb92c2b36_8c14_43e5_aa07_43aadfe3cda4.slice/crio-4bc33fca28adbd0284c95dac8cfbeb4bf8bc856d849a300607ded701c6f7e972 WatchSource:0}: Error finding container 4bc33fca28adbd0284c95dac8cfbeb4bf8bc856d849a300607ded701c6f7e972: Status 404 returned error can't find the container with id 4bc33fca28adbd0284c95dac8cfbeb4bf8bc856d849a300607ded701c6f7e972 Jan 27 07:01:42 crc kubenswrapper[4796]: I0127 07:01:42.069442 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5c58fc478-n7ck7" event={"ID":"b92c2b36-8c14-43e5-aa07-43aadfe3cda4","Type":"ContainerStarted","Data":"4bc33fca28adbd0284c95dac8cfbeb4bf8bc856d849a300607ded701c6f7e972"} Jan 27 07:01:43 crc kubenswrapper[4796]: I0127 07:01:43.167217 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7gnq6" Jan 27 07:01:43 crc kubenswrapper[4796]: I0127 07:01:43.167571 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7gnq6" Jan 27 07:01:43 crc kubenswrapper[4796]: I0127 07:01:43.205916 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7gnq6" Jan 27 07:01:44 crc kubenswrapper[4796]: I0127 07:01:44.145515 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7gnq6" Jan 27 07:01:45 crc kubenswrapper[4796]: I0127 07:01:45.270397 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7gnq6"] Jan 27 07:01:45 crc kubenswrapper[4796]: I0127 07:01:45.624614 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z8mzp"] Jan 27 07:01:45 crc kubenswrapper[4796]: I0127 07:01:45.625278 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z8mzp" podUID="74b88900-aeaa-4111-a665-af4559febdd8" containerName="registry-server" containerID="cri-o://be2e8cf3bd890a5674fac0f67ca99e7b68b40955b3215a7b7f954e16eb9fa108" gracePeriod=2 Jan 27 07:01:46 crc kubenswrapper[4796]: I0127 07:01:46.132190 4796 generic.go:334] "Generic (PLEG): container finished" podID="74b88900-aeaa-4111-a665-af4559febdd8" containerID="be2e8cf3bd890a5674fac0f67ca99e7b68b40955b3215a7b7f954e16eb9fa108" exitCode=0 Jan 27 07:01:46 crc kubenswrapper[4796]: I0127 07:01:46.132308 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8mzp" event={"ID":"74b88900-aeaa-4111-a665-af4559febdd8","Type":"ContainerDied","Data":"be2e8cf3bd890a5674fac0f67ca99e7b68b40955b3215a7b7f954e16eb9fa108"} Jan 27 07:01:47 crc kubenswrapper[4796]: I0127 07:01:47.097487 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z8mzp" Jan 27 07:01:47 crc kubenswrapper[4796]: I0127 07:01:47.143813 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z8mzp" event={"ID":"74b88900-aeaa-4111-a665-af4559febdd8","Type":"ContainerDied","Data":"68c881b892d164664cca29eaa6e5fb37e68f3ffcb9018726d5bfea67637a2125"} Jan 27 07:01:47 crc kubenswrapper[4796]: I0127 07:01:47.143881 4796 scope.go:117] "RemoveContainer" containerID="be2e8cf3bd890a5674fac0f67ca99e7b68b40955b3215a7b7f954e16eb9fa108" Jan 27 07:01:47 crc kubenswrapper[4796]: I0127 07:01:47.143889 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z8mzp" Jan 27 07:01:47 crc kubenswrapper[4796]: I0127 07:01:47.158667 4796 scope.go:117] "RemoveContainer" containerID="100ea1d4cecf5ccb2b51c1b90a71c6b5887908500121ccfde33f7fce669eddf3" Jan 27 07:01:47 crc kubenswrapper[4796]: I0127 07:01:47.176405 4796 scope.go:117] "RemoveContainer" containerID="659bc79fe189fc0a3fe05ab7ba996e6bd30e9094df5f460478c977f4914e9741" Jan 27 07:01:47 crc kubenswrapper[4796]: I0127 07:01:47.190529 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74b88900-aeaa-4111-a665-af4559febdd8-catalog-content\") pod \"74b88900-aeaa-4111-a665-af4559febdd8\" (UID: \"74b88900-aeaa-4111-a665-af4559febdd8\") " Jan 27 07:01:47 crc kubenswrapper[4796]: I0127 07:01:47.190618 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74b88900-aeaa-4111-a665-af4559febdd8-utilities\") pod \"74b88900-aeaa-4111-a665-af4559febdd8\" (UID: \"74b88900-aeaa-4111-a665-af4559febdd8\") " Jan 27 07:01:47 crc kubenswrapper[4796]: I0127 07:01:47.190672 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ml45x\" (UniqueName: \"kubernetes.io/projected/74b88900-aeaa-4111-a665-af4559febdd8-kube-api-access-ml45x\") pod \"74b88900-aeaa-4111-a665-af4559febdd8\" (UID: \"74b88900-aeaa-4111-a665-af4559febdd8\") " Jan 27 07:01:47 crc kubenswrapper[4796]: I0127 07:01:47.191710 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74b88900-aeaa-4111-a665-af4559febdd8-utilities" (OuterVolumeSpecName: "utilities") pod "74b88900-aeaa-4111-a665-af4559febdd8" (UID: "74b88900-aeaa-4111-a665-af4559febdd8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:01:47 crc kubenswrapper[4796]: I0127 07:01:47.196526 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74b88900-aeaa-4111-a665-af4559febdd8-kube-api-access-ml45x" (OuterVolumeSpecName: "kube-api-access-ml45x") pod "74b88900-aeaa-4111-a665-af4559febdd8" (UID: "74b88900-aeaa-4111-a665-af4559febdd8"). InnerVolumeSpecName "kube-api-access-ml45x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:01:47 crc kubenswrapper[4796]: I0127 07:01:47.238523 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74b88900-aeaa-4111-a665-af4559febdd8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74b88900-aeaa-4111-a665-af4559febdd8" (UID: "74b88900-aeaa-4111-a665-af4559febdd8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:01:47 crc kubenswrapper[4796]: I0127 07:01:47.292208 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74b88900-aeaa-4111-a665-af4559febdd8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:01:47 crc kubenswrapper[4796]: I0127 07:01:47.292262 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74b88900-aeaa-4111-a665-af4559febdd8-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:01:47 crc kubenswrapper[4796]: I0127 07:01:47.292278 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ml45x\" (UniqueName: \"kubernetes.io/projected/74b88900-aeaa-4111-a665-af4559febdd8-kube-api-access-ml45x\") on node \"crc\" DevicePath \"\"" Jan 27 07:01:47 crc kubenswrapper[4796]: I0127 07:01:47.472312 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z8mzp"] Jan 27 07:01:47 crc kubenswrapper[4796]: I0127 07:01:47.477376 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z8mzp"] Jan 27 07:01:48 crc kubenswrapper[4796]: I0127 07:01:48.150631 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5c58fc478-n7ck7" event={"ID":"b92c2b36-8c14-43e5-aa07-43aadfe3cda4","Type":"ContainerStarted","Data":"9fbf700ac0485db3f1009b330e26c67336145297f4884148c61e457bda093994"} Jan 27 07:01:48 crc kubenswrapper[4796]: I0127 07:01:48.150784 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5c58fc478-n7ck7" Jan 27 07:01:48 crc kubenswrapper[4796]: I0127 07:01:48.185832 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5c58fc478-n7ck7" podStartSLOduration=3.129355185 podStartE2EDuration="9.185807825s" podCreationTimestamp="2026-01-27 07:01:39 +0000 UTC" firstStartedPulling="2026-01-27 07:01:41.072117161 +0000 UTC m=+922.179084508" lastFinishedPulling="2026-01-27 07:01:47.128569821 +0000 UTC m=+928.235537148" observedRunningTime="2026-01-27 07:01:48.18472582 +0000 UTC m=+929.291693147" watchObservedRunningTime="2026-01-27 07:01:48.185807825 +0000 UTC m=+929.292775152" Jan 27 07:01:48 crc kubenswrapper[4796]: I0127 07:01:48.757301 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74b88900-aeaa-4111-a665-af4559febdd8" path="/var/lib/kubelet/pods/74b88900-aeaa-4111-a665-af4559febdd8/volumes" Jan 27 07:02:00 crc kubenswrapper[4796]: I0127 07:02:00.591205 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5c58fc478-n7ck7" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.643315 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-65ff799cfd-k9hg7"] Jan 27 07:02:18 crc kubenswrapper[4796]: E0127 07:02:18.643998 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74b88900-aeaa-4111-a665-af4559febdd8" containerName="registry-server" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.644013 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="74b88900-aeaa-4111-a665-af4559febdd8" containerName="registry-server" Jan 27 07:02:18 crc kubenswrapper[4796]: E0127 07:02:18.644032 4796 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74b88900-aeaa-4111-a665-af4559febdd8" containerName="extract-content" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.644040 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="74b88900-aeaa-4111-a665-af4559febdd8" containerName="extract-content" Jan 27 07:02:18 crc kubenswrapper[4796]: E0127 07:02:18.644055 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74b88900-aeaa-4111-a665-af4559febdd8" containerName="extract-utilities" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.644063 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="74b88900-aeaa-4111-a665-af4559febdd8" containerName="extract-utilities" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.644212 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="74b88900-aeaa-4111-a665-af4559febdd8" containerName="registry-server" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.644666 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-k9hg7" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.648901 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-q2bg2" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.655987 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-4ksng"] Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.657199 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-4ksng" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.662040 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-244fl" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.668437 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-65ff799cfd-k9hg7"] Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.674956 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-v7pbv"] Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.675795 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-v7pbv" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.678448 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-fpdrj" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.694319 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-v7pbv"] Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.706616 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-4ksng"] Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.711432 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-z2nvf"] Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.712173 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-z2nvf" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.717050 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-snsd6" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.725240 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vgbz\" (UniqueName: \"kubernetes.io/projected/aa911582-096f-4b20-876c-c765de54b4fd-kube-api-access-5vgbz\") pod \"cinder-operator-controller-manager-655bf9cfbb-4ksng\" (UID: \"aa911582-096f-4b20-876c-c765de54b4fd\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-4ksng" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.725312 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gn8v\" (UniqueName: \"kubernetes.io/projected/9aaf11e1-9a42-4ab2-a112-81b2d07e56d8-kube-api-access-7gn8v\") pod \"designate-operator-controller-manager-77554cdc5c-v7pbv\" (UID: \"9aaf11e1-9a42-4ab2-a112-81b2d07e56d8\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-v7pbv" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.725385 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q8xs\" (UniqueName: \"kubernetes.io/projected/742577d6-bab7-4548-a7b2-84f562498a1c-kube-api-access-9q8xs\") pod \"barbican-operator-controller-manager-65ff799cfd-k9hg7\" (UID: \"742577d6-bab7-4548-a7b2-84f562498a1c\") " pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-k9hg7" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.734254 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-z2nvf"] Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.740632 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-74866cc64d-fsxqp"] Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.741759 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-74866cc64d-fsxqp" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.762047 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-6m94d" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.763593 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-74866cc64d-fsxqp"] Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.763697 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-mqmpw"] Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.764426 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-mqmpw" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.768792 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-jghmp" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.785458 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-q44zr"] Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.786453 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-q44zr" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.788988 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-vdtzr" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.794218 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-7l6kl"] Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.794957 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-7l6kl" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.798477 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.798700 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-kscj6" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.809502 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-q44zr"] Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.840168 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gn8v\" (UniqueName: \"kubernetes.io/projected/9aaf11e1-9a42-4ab2-a112-81b2d07e56d8-kube-api-access-7gn8v\") pod \"designate-operator-controller-manager-77554cdc5c-v7pbv\" (UID: \"9aaf11e1-9a42-4ab2-a112-81b2d07e56d8\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-v7pbv" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.840231 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfmfs\" (UniqueName: \"kubernetes.io/projected/789b993b-5e32-4cd6-811f-8aecbe093298-kube-api-access-xfmfs\") pod \"infra-operator-controller-manager-7d75bc88d5-7l6kl\" (UID: \"789b993b-5e32-4cd6-811f-8aecbe093298\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-7l6kl" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.840259 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2rss\" (UniqueName: \"kubernetes.io/projected/af19b00f-4091-45da-b90b-15e1265c4239-kube-api-access-z2rss\") pod \"ironic-operator-controller-manager-768b776ffb-q44zr\" (UID: \"af19b00f-4091-45da-b90b-15e1265c4239\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-q44zr" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.840301 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7snx\" (UniqueName: 
\"kubernetes.io/projected/eebb0fb3-128f-4de7-afb9-9d0d2963b51e-kube-api-access-r7snx\") pod \"glance-operator-controller-manager-67dd55ff59-z2nvf\" (UID: \"eebb0fb3-128f-4de7-afb9-9d0d2963b51e\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-z2nvf" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.840326 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q8xs\" (UniqueName: \"kubernetes.io/projected/742577d6-bab7-4548-a7b2-84f562498a1c-kube-api-access-9q8xs\") pod \"barbican-operator-controller-manager-65ff799cfd-k9hg7\" (UID: \"742577d6-bab7-4548-a7b2-84f562498a1c\") " pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-k9hg7" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.840348 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtg5m\" (UniqueName: \"kubernetes.io/projected/1ea0aa0e-1344-489a-9f52-8a677dfaba38-kube-api-access-qtg5m\") pod \"horizon-operator-controller-manager-77d5c5b54f-mqmpw\" (UID: \"1ea0aa0e-1344-489a-9f52-8a677dfaba38\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-mqmpw" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.840367 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb97s\" (UniqueName: \"kubernetes.io/projected/649daee5-1c25-49bf-ade1-83c14d9603a3-kube-api-access-nb97s\") pod \"heat-operator-controller-manager-74866cc64d-fsxqp\" (UID: \"649daee5-1c25-49bf-ade1-83c14d9603a3\") " pod="openstack-operators/heat-operator-controller-manager-74866cc64d-fsxqp" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.840391 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/789b993b-5e32-4cd6-811f-8aecbe093298-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-7l6kl\" (UID: \"789b993b-5e32-4cd6-811f-8aecbe093298\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-7l6kl" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.840438 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vgbz\" (UniqueName: \"kubernetes.io/projected/aa911582-096f-4b20-876c-c765de54b4fd-kube-api-access-5vgbz\") pod \"cinder-operator-controller-manager-655bf9cfbb-4ksng\" (UID: \"aa911582-096f-4b20-876c-c765de54b4fd\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-4ksng" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.850605 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-7l6kl"] Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.853531 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-mqmpw"] Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.874047 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-rhfpn"] Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.891733 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q8xs\" (UniqueName: \"kubernetes.io/projected/742577d6-bab7-4548-a7b2-84f562498a1c-kube-api-access-9q8xs\") pod \"barbican-operator-controller-manager-65ff799cfd-k9hg7\" (UID: \"742577d6-bab7-4548-a7b2-84f562498a1c\") 
" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-k9hg7" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.892403 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vgbz\" (UniqueName: \"kubernetes.io/projected/aa911582-096f-4b20-876c-c765de54b4fd-kube-api-access-5vgbz\") pod \"cinder-operator-controller-manager-655bf9cfbb-4ksng\" (UID: \"aa911582-096f-4b20-876c-c765de54b4fd\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-4ksng" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.892504 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-rhfpn" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.898885 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-5stkw" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.943249 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-cpcgw"] Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.984446 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/789b993b-5e32-4cd6-811f-8aecbe093298-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-7l6kl\" (UID: \"789b993b-5e32-4cd6-811f-8aecbe093298\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-7l6kl" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.984501 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8dwz\" (UniqueName: \"kubernetes.io/projected/614461e3-64f7-4aa0-96a7-f8b31fefdbf1-kube-api-access-s8dwz\") pod \"keystone-operator-controller-manager-55f684fd56-rhfpn\" (UID: \"614461e3-64f7-4aa0-96a7-f8b31fefdbf1\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-rhfpn" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.984566 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfmfs\" (UniqueName: \"kubernetes.io/projected/789b993b-5e32-4cd6-811f-8aecbe093298-kube-api-access-xfmfs\") pod \"infra-operator-controller-manager-7d75bc88d5-7l6kl\" (UID: \"789b993b-5e32-4cd6-811f-8aecbe093298\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-7l6kl" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.984589 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2rss\" (UniqueName: \"kubernetes.io/projected/af19b00f-4091-45da-b90b-15e1265c4239-kube-api-access-z2rss\") pod \"ironic-operator-controller-manager-768b776ffb-q44zr\" (UID: \"af19b00f-4091-45da-b90b-15e1265c4239\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-q44zr" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.984622 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7snx\" (UniqueName: \"kubernetes.io/projected/eebb0fb3-128f-4de7-afb9-9d0d2963b51e-kube-api-access-r7snx\") pod \"glance-operator-controller-manager-67dd55ff59-z2nvf\" (UID: \"eebb0fb3-128f-4de7-afb9-9d0d2963b51e\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-z2nvf" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.984652 4796 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-nb97s\" (UniqueName: \"kubernetes.io/projected/649daee5-1c25-49bf-ade1-83c14d9603a3-kube-api-access-nb97s\") pod \"heat-operator-controller-manager-74866cc64d-fsxqp\" (UID: \"649daee5-1c25-49bf-ade1-83c14d9603a3\") " pod="openstack-operators/heat-operator-controller-manager-74866cc64d-fsxqp" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.984670 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtg5m\" (UniqueName: \"kubernetes.io/projected/1ea0aa0e-1344-489a-9f52-8a677dfaba38-kube-api-access-qtg5m\") pod \"horizon-operator-controller-manager-77d5c5b54f-mqmpw\" (UID: \"1ea0aa0e-1344-489a-9f52-8a677dfaba38\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-mqmpw" Jan 27 07:02:18 crc kubenswrapper[4796]: E0127 07:02:18.984902 4796 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 07:02:18 crc kubenswrapper[4796]: E0127 07:02:18.984946 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/789b993b-5e32-4cd6-811f-8aecbe093298-cert podName:789b993b-5e32-4cd6-811f-8aecbe093298 nodeName:}" failed. No retries permitted until 2026-01-27 07:02:19.484931 +0000 UTC m=+960.591898327 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/789b993b-5e32-4cd6-811f-8aecbe093298-cert") pod "infra-operator-controller-manager-7d75bc88d5-7l6kl" (UID: "789b993b-5e32-4cd6-811f-8aecbe093298") : secret "infra-operator-webhook-server-cert" not found Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.985723 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-cpcgw" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.986638 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gn8v\" (UniqueName: \"kubernetes.io/projected/9aaf11e1-9a42-4ab2-a112-81b2d07e56d8-kube-api-access-7gn8v\") pod \"designate-operator-controller-manager-77554cdc5c-v7pbv\" (UID: \"9aaf11e1-9a42-4ab2-a112-81b2d07e56d8\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-v7pbv" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.990902 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-k9hg7" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.994757 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-4ksng" Jan 27 07:02:18 crc kubenswrapper[4796]: I0127 07:02:18.995640 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-cnxq7" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.006565 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-v7pbv" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.033900 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7snx\" (UniqueName: \"kubernetes.io/projected/eebb0fb3-128f-4de7-afb9-9d0d2963b51e-kube-api-access-r7snx\") pod \"glance-operator-controller-manager-67dd55ff59-z2nvf\" (UID: \"eebb0fb3-128f-4de7-afb9-9d0d2963b51e\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-z2nvf" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.034579 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-rhfpn"] Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.037587 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-v6mzv"] Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.034896 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-z2nvf" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.040325 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb97s\" (UniqueName: \"kubernetes.io/projected/649daee5-1c25-49bf-ade1-83c14d9603a3-kube-api-access-nb97s\") pod \"heat-operator-controller-manager-74866cc64d-fsxqp\" (UID: \"649daee5-1c25-49bf-ade1-83c14d9603a3\") " pod="openstack-operators/heat-operator-controller-manager-74866cc64d-fsxqp" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.041896 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-v6mzv" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.047305 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2rss\" (UniqueName: \"kubernetes.io/projected/af19b00f-4091-45da-b90b-15e1265c4239-kube-api-access-z2rss\") pod \"ironic-operator-controller-manager-768b776ffb-q44zr\" (UID: \"af19b00f-4091-45da-b90b-15e1265c4239\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-q44zr" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.048364 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfmfs\" (UniqueName: \"kubernetes.io/projected/789b993b-5e32-4cd6-811f-8aecbe093298-kube-api-access-xfmfs\") pod \"infra-operator-controller-manager-7d75bc88d5-7l6kl\" (UID: \"789b993b-5e32-4cd6-811f-8aecbe093298\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-7l6kl" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.048577 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-wgxls" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.062812 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-cpcgw"] Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.070402 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-74866cc64d-fsxqp" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.075043 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-v6mzv"] Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.076128 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtg5m\" (UniqueName: \"kubernetes.io/projected/1ea0aa0e-1344-489a-9f52-8a677dfaba38-kube-api-access-qtg5m\") pod \"horizon-operator-controller-manager-77d5c5b54f-mqmpw\" (UID: \"1ea0aa0e-1344-489a-9f52-8a677dfaba38\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-mqmpw" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.087910 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-mqmpw" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.088828 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4465\" (UniqueName: \"kubernetes.io/projected/54314102-584a-4445-9477-7b089fabe859-kube-api-access-j4465\") pod \"manila-operator-controller-manager-849fcfbb6b-cpcgw\" (UID: \"54314102-584a-4445-9477-7b089fabe859\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-cpcgw" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.088902 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8dwz\" (UniqueName: \"kubernetes.io/projected/614461e3-64f7-4aa0-96a7-f8b31fefdbf1-kube-api-access-s8dwz\") pod \"keystone-operator-controller-manager-55f684fd56-rhfpn\" (UID: \"614461e3-64f7-4aa0-96a7-f8b31fefdbf1\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-rhfpn" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.091591 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-m6r6w"] Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.092384 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-m6r6w" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.101088 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-q4mx8" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.105978 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-q44zr" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.124468 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8dwz\" (UniqueName: \"kubernetes.io/projected/614461e3-64f7-4aa0-96a7-f8b31fefdbf1-kube-api-access-s8dwz\") pod \"keystone-operator-controller-manager-55f684fd56-rhfpn\" (UID: \"614461e3-64f7-4aa0-96a7-f8b31fefdbf1\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-rhfpn" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.146943 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-m6r6w"] Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.161327 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7f54b7d6d4-gh6tm"] Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.163435 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-gh6tm" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.171761 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-wwx97" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.184468 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7f54b7d6d4-gh6tm"] Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.184736 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7875d7675-26p8v"] Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.185679 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-26p8v" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.187642 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-gcqlb" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.190426 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7875d7675-26p8v"] Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.191488 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4465\" (UniqueName: \"kubernetes.io/projected/54314102-584a-4445-9477-7b089fabe859-kube-api-access-j4465\") pod \"manila-operator-controller-manager-849fcfbb6b-cpcgw\" (UID: \"54314102-584a-4445-9477-7b089fabe859\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-cpcgw" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.191564 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f8hc\" (UniqueName: \"kubernetes.io/projected/2882c56b-f839-4eb0-8b9c-9dce77a548ae-kube-api-access-7f8hc\") pod \"neutron-operator-controller-manager-7ffd8d76d4-m6r6w\" (UID: \"2882c56b-f839-4eb0-8b9c-9dce77a548ae\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-m6r6w" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.191609 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bpkp\" (UniqueName: \"kubernetes.io/projected/d114853c-9a01-4565-b547-aaccd3ab9d26-kube-api-access-4bpkp\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-v6mzv\" (UID: \"d114853c-9a01-4565-b547-aaccd3ab9d26\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-v6mzv" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.224633 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-mrhx6"] Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.225408 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mrhx6" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.228791 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-7qfkm" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.233916 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854v5n85"] Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.234708 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854v5n85" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.236807 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-xzt69" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.237113 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.247424 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4465\" (UniqueName: \"kubernetes.io/projected/54314102-584a-4445-9477-7b089fabe859-kube-api-access-j4465\") pod \"manila-operator-controller-manager-849fcfbb6b-cpcgw\" (UID: \"54314102-584a-4445-9477-7b089fabe859\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-cpcgw" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.252464 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-mcxtc"] Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.253345 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mcxtc" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.257336 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-rmnh9" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.285025 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-dt7z6"] Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.285876 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-dt7z6" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.288700 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-wphjv" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.293165 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8gt9\" (UniqueName: \"kubernetes.io/projected/08457a1a-2d34-4d6f-8df1-c40f80daf96b-kube-api-access-v8gt9\") pod \"nova-operator-controller-manager-7f54b7d6d4-gh6tm\" (UID: \"08457a1a-2d34-4d6f-8df1-c40f80daf96b\") " pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-gh6tm" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.293222 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f8hc\" (UniqueName: \"kubernetes.io/projected/2882c56b-f839-4eb0-8b9c-9dce77a548ae-kube-api-access-7f8hc\") pod \"neutron-operator-controller-manager-7ffd8d76d4-m6r6w\" (UID: \"2882c56b-f839-4eb0-8b9c-9dce77a548ae\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-m6r6w" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.293244 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnj2q\" (UniqueName: \"kubernetes.io/projected/baedb6b3-ef71-431f-aab8-2acd5b458e71-kube-api-access-hnj2q\") pod \"octavia-operator-controller-manager-7875d7675-26p8v\" (UID: \"baedb6b3-ef71-431f-aab8-2acd5b458e71\") " pod="openstack-operators/octavia-operator-controller-manager-7875d7675-26p8v" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.293561 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bpkp\" (UniqueName: \"kubernetes.io/projected/d114853c-9a01-4565-b547-aaccd3ab9d26-kube-api-access-4bpkp\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-v6mzv\" (UID: \"d114853c-9a01-4565-b547-aaccd3ab9d26\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-v6mzv" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.293596 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsjjc\" (UniqueName: \"kubernetes.io/projected/dd9ed078-9356-4d55-96d5-33039c9a0c68-kube-api-access-lsjjc\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854v5n85\" (UID: \"dd9ed078-9356-4d55-96d5-33039c9a0c68\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854v5n85" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.293628 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dd9ed078-9356-4d55-96d5-33039c9a0c68-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854v5n85\" (UID: \"dd9ed078-9356-4d55-96d5-33039c9a0c68\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854v5n85" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.293651 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7pt6\" (UniqueName: \"kubernetes.io/projected/bd76656d-eae6-4878-ae92-c27deac760ac-kube-api-access-c7pt6\") pod \"ovn-operator-controller-manager-6f75f45d54-mrhx6\" (UID: \"bd76656d-eae6-4878-ae92-c27deac760ac\") " 
pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mrhx6" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.297202 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-rhfpn" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.297704 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-j7dc7"] Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.298436 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-j7dc7" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.306421 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-cpcgw" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.309831 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-mrhx6"] Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.310959 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-2chjh" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.320032 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-dt7z6"] Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.322444 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f8hc\" (UniqueName: \"kubernetes.io/projected/2882c56b-f839-4eb0-8b9c-9dce77a548ae-kube-api-access-7f8hc\") pod \"neutron-operator-controller-manager-7ffd8d76d4-m6r6w\" (UID: \"2882c56b-f839-4eb0-8b9c-9dce77a548ae\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-m6r6w" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.327649 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-mcxtc"] Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.330032 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bpkp\" (UniqueName: \"kubernetes.io/projected/d114853c-9a01-4565-b547-aaccd3ab9d26-kube-api-access-4bpkp\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-v6mzv\" (UID: \"d114853c-9a01-4565-b547-aaccd3ab9d26\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-v6mzv" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.338253 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854v5n85"] Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.361404 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-j7dc7"] Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.369098 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-8lwww"] Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.370042 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8lwww" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.373562 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-bsptv" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.385703 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-8lwww"] Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.395505 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnj2q\" (UniqueName: \"kubernetes.io/projected/baedb6b3-ef71-431f-aab8-2acd5b458e71-kube-api-access-hnj2q\") pod \"octavia-operator-controller-manager-7875d7675-26p8v\" (UID: \"baedb6b3-ef71-431f-aab8-2acd5b458e71\") " pod="openstack-operators/octavia-operator-controller-manager-7875d7675-26p8v" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.395834 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsjjc\" (UniqueName: \"kubernetes.io/projected/dd9ed078-9356-4d55-96d5-33039c9a0c68-kube-api-access-lsjjc\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854v5n85\" (UID: \"dd9ed078-9356-4d55-96d5-33039c9a0c68\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854v5n85" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.395879 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8sxv\" (UniqueName: \"kubernetes.io/projected/013b66ef-a690-4075-9afc-9c9dd1822a3f-kube-api-access-p8sxv\") pod \"placement-operator-controller-manager-79d5ccc684-mcxtc\" (UID: \"013b66ef-a690-4075-9afc-9c9dd1822a3f\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mcxtc" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.395910 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dd9ed078-9356-4d55-96d5-33039c9a0c68-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854v5n85\" (UID: \"dd9ed078-9356-4d55-96d5-33039c9a0c68\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854v5n85" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.395935 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7pt6\" (UniqueName: \"kubernetes.io/projected/bd76656d-eae6-4878-ae92-c27deac760ac-kube-api-access-c7pt6\") pod \"ovn-operator-controller-manager-6f75f45d54-mrhx6\" (UID: \"bd76656d-eae6-4878-ae92-c27deac760ac\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mrhx6" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.395954 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94c7w\" (UniqueName: \"kubernetes.io/projected/d0fc3b17-8640-4ad4-8794-890980c2cd92-kube-api-access-94c7w\") pod \"swift-operator-controller-manager-547cbdb99f-j7dc7\" (UID: \"d0fc3b17-8640-4ad4-8794-890980c2cd92\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-j7dc7" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.395979 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8gt9\" (UniqueName: 
\"kubernetes.io/projected/08457a1a-2d34-4d6f-8df1-c40f80daf96b-kube-api-access-v8gt9\") pod \"nova-operator-controller-manager-7f54b7d6d4-gh6tm\" (UID: \"08457a1a-2d34-4d6f-8df1-c40f80daf96b\") " pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-gh6tm" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.396003 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw682\" (UniqueName: \"kubernetes.io/projected/7e4727d2-ec3b-4b51-bdc4-d4bf1f91c0ed-kube-api-access-bw682\") pod \"telemetry-operator-controller-manager-799bc87c89-dt7z6\" (UID: \"7e4727d2-ec3b-4b51-bdc4-d4bf1f91c0ed\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-dt7z6" Jan 27 07:02:19 crc kubenswrapper[4796]: E0127 07:02:19.396436 4796 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:02:19 crc kubenswrapper[4796]: E0127 07:02:19.396476 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dd9ed078-9356-4d55-96d5-33039c9a0c68-cert podName:dd9ed078-9356-4d55-96d5-33039c9a0c68 nodeName:}" failed. No retries permitted until 2026-01-27 07:02:19.896463285 +0000 UTC m=+961.003430612 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dd9ed078-9356-4d55-96d5-33039c9a0c68-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854v5n85" (UID: "dd9ed078-9356-4d55-96d5-33039c9a0c68") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.420572 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnj2q\" (UniqueName: \"kubernetes.io/projected/baedb6b3-ef71-431f-aab8-2acd5b458e71-kube-api-access-hnj2q\") pod \"octavia-operator-controller-manager-7875d7675-26p8v\" (UID: \"baedb6b3-ef71-431f-aab8-2acd5b458e71\") " pod="openstack-operators/octavia-operator-controller-manager-7875d7675-26p8v" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.428159 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsjjc\" (UniqueName: \"kubernetes.io/projected/dd9ed078-9356-4d55-96d5-33039c9a0c68-kube-api-access-lsjjc\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854v5n85\" (UID: \"dd9ed078-9356-4d55-96d5-33039c9a0c68\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854v5n85" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.429620 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8gt9\" (UniqueName: \"kubernetes.io/projected/08457a1a-2d34-4d6f-8df1-c40f80daf96b-kube-api-access-v8gt9\") pod \"nova-operator-controller-manager-7f54b7d6d4-gh6tm\" (UID: \"08457a1a-2d34-4d6f-8df1-c40f80daf96b\") " pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-gh6tm" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.429695 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75db85654f-7m7qw"] Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.430599 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-7m7qw" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.433108 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-rrqvf" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.434651 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-v6mzv" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.434813 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7pt6\" (UniqueName: \"kubernetes.io/projected/bd76656d-eae6-4878-ae92-c27deac760ac-kube-api-access-c7pt6\") pod \"ovn-operator-controller-manager-6f75f45d54-mrhx6\" (UID: \"bd76656d-eae6-4878-ae92-c27deac760ac\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mrhx6" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.440934 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75db85654f-7m7qw"] Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.459499 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-m6r6w" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.504776 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-gh6tm" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.505300 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw682\" (UniqueName: \"kubernetes.io/projected/7e4727d2-ec3b-4b51-bdc4-d4bf1f91c0ed-kube-api-access-bw682\") pod \"telemetry-operator-controller-manager-799bc87c89-dt7z6\" (UID: \"7e4727d2-ec3b-4b51-bdc4-d4bf1f91c0ed\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-dt7z6" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.505381 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jw4t\" (UniqueName: \"kubernetes.io/projected/dde7d536-fd29-4887-82eb-41b2002bf874-kube-api-access-9jw4t\") pod \"test-operator-controller-manager-69797bbcbd-8lwww\" (UID: \"dde7d536-fd29-4887-82eb-41b2002bf874\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8lwww" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.505418 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/789b993b-5e32-4cd6-811f-8aecbe093298-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-7l6kl\" (UID: \"789b993b-5e32-4cd6-811f-8aecbe093298\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-7l6kl" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.505436 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whdlk\" (UniqueName: \"kubernetes.io/projected/e68c56f0-76cf-4607-8235-ecd1445ba2de-kube-api-access-whdlk\") pod \"watcher-operator-controller-manager-75db85654f-7m7qw\" (UID: \"e68c56f0-76cf-4607-8235-ecd1445ba2de\") " pod="openstack-operators/watcher-operator-controller-manager-75db85654f-7m7qw" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.505466 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-p8sxv\" (UniqueName: \"kubernetes.io/projected/013b66ef-a690-4075-9afc-9c9dd1822a3f-kube-api-access-p8sxv\") pod \"placement-operator-controller-manager-79d5ccc684-mcxtc\" (UID: \"013b66ef-a690-4075-9afc-9c9dd1822a3f\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mcxtc" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.505515 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94c7w\" (UniqueName: \"kubernetes.io/projected/d0fc3b17-8640-4ad4-8794-890980c2cd92-kube-api-access-94c7w\") pod \"swift-operator-controller-manager-547cbdb99f-j7dc7\" (UID: \"d0fc3b17-8640-4ad4-8794-890980c2cd92\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-j7dc7" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.505979 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-26p8v" Jan 27 07:02:19 crc kubenswrapper[4796]: E0127 07:02:19.506649 4796 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 07:02:19 crc kubenswrapper[4796]: E0127 07:02:19.506696 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/789b993b-5e32-4cd6-811f-8aecbe093298-cert podName:789b993b-5e32-4cd6-811f-8aecbe093298 nodeName:}" failed. No retries permitted until 2026-01-27 07:02:20.50667763 +0000 UTC m=+961.613644957 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/789b993b-5e32-4cd6-811f-8aecbe093298-cert") pod "infra-operator-controller-manager-7d75bc88d5-7l6kl" (UID: "789b993b-5e32-4cd6-811f-8aecbe093298") : secret "infra-operator-webhook-server-cert" not found Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.541521 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94c7w\" (UniqueName: \"kubernetes.io/projected/d0fc3b17-8640-4ad4-8794-890980c2cd92-kube-api-access-94c7w\") pod \"swift-operator-controller-manager-547cbdb99f-j7dc7\" (UID: \"d0fc3b17-8640-4ad4-8794-890980c2cd92\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-j7dc7" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.541661 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw682\" (UniqueName: \"kubernetes.io/projected/7e4727d2-ec3b-4b51-bdc4-d4bf1f91c0ed-kube-api-access-bw682\") pod \"telemetry-operator-controller-manager-799bc87c89-dt7z6\" (UID: \"7e4727d2-ec3b-4b51-bdc4-d4bf1f91c0ed\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-dt7z6" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.543072 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8sxv\" (UniqueName: \"kubernetes.io/projected/013b66ef-a690-4075-9afc-9c9dd1822a3f-kube-api-access-p8sxv\") pod \"placement-operator-controller-manager-79d5ccc684-mcxtc\" (UID: \"013b66ef-a690-4075-9afc-9c9dd1822a3f\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mcxtc" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.549936 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-64d6b84b7b-4pfvn"] Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.551088 4796 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-4pfvn" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.552836 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mrhx6" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.553284 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.553284 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.553345 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-rdp74" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.565120 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-64d6b84b7b-4pfvn"] Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.583256 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mcxtc" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.583603 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xjlcw"] Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.584356 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xjlcw" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.586584 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-jnshp" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.593123 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xjlcw"] Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.608206 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm2st\" (UniqueName: \"kubernetes.io/projected/214120a9-fff2-405b-badc-8dbe507546bd-kube-api-access-bm2st\") pod \"openstack-operator-controller-manager-64d6b84b7b-4pfvn\" (UID: \"214120a9-fff2-405b-badc-8dbe507546bd\") " pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-4pfvn" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.608288 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jw4t\" (UniqueName: \"kubernetes.io/projected/dde7d536-fd29-4887-82eb-41b2002bf874-kube-api-access-9jw4t\") pod \"test-operator-controller-manager-69797bbcbd-8lwww\" (UID: \"dde7d536-fd29-4887-82eb-41b2002bf874\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8lwww" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.608325 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whdlk\" (UniqueName: \"kubernetes.io/projected/e68c56f0-76cf-4607-8235-ecd1445ba2de-kube-api-access-whdlk\") pod \"watcher-operator-controller-manager-75db85654f-7m7qw\" (UID: \"e68c56f0-76cf-4607-8235-ecd1445ba2de\") " pod="openstack-operators/watcher-operator-controller-manager-75db85654f-7m7qw" Jan 27 
07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.608394 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/214120a9-fff2-405b-badc-8dbe507546bd-webhook-certs\") pod \"openstack-operator-controller-manager-64d6b84b7b-4pfvn\" (UID: \"214120a9-fff2-405b-badc-8dbe507546bd\") " pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-4pfvn" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.608429 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/214120a9-fff2-405b-badc-8dbe507546bd-metrics-certs\") pod \"openstack-operator-controller-manager-64d6b84b7b-4pfvn\" (UID: \"214120a9-fff2-405b-badc-8dbe507546bd\") " pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-4pfvn" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.615949 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-dt7z6" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.633326 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whdlk\" (UniqueName: \"kubernetes.io/projected/e68c56f0-76cf-4607-8235-ecd1445ba2de-kube-api-access-whdlk\") pod \"watcher-operator-controller-manager-75db85654f-7m7qw\" (UID: \"e68c56f0-76cf-4607-8235-ecd1445ba2de\") " pod="openstack-operators/watcher-operator-controller-manager-75db85654f-7m7qw" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.636856 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-j7dc7" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.640083 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jw4t\" (UniqueName: \"kubernetes.io/projected/dde7d536-fd29-4887-82eb-41b2002bf874-kube-api-access-9jw4t\") pod \"test-operator-controller-manager-69797bbcbd-8lwww\" (UID: \"dde7d536-fd29-4887-82eb-41b2002bf874\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8lwww" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.712277 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/214120a9-fff2-405b-badc-8dbe507546bd-webhook-certs\") pod \"openstack-operator-controller-manager-64d6b84b7b-4pfvn\" (UID: \"214120a9-fff2-405b-badc-8dbe507546bd\") " pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-4pfvn" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.712629 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/214120a9-fff2-405b-badc-8dbe507546bd-metrics-certs\") pod \"openstack-operator-controller-manager-64d6b84b7b-4pfvn\" (UID: \"214120a9-fff2-405b-badc-8dbe507546bd\") " pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-4pfvn" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.712664 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm2st\" (UniqueName: \"kubernetes.io/projected/214120a9-fff2-405b-badc-8dbe507546bd-kube-api-access-bm2st\") pod \"openstack-operator-controller-manager-64d6b84b7b-4pfvn\" (UID: \"214120a9-fff2-405b-badc-8dbe507546bd\") " 
pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-4pfvn" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.712689 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9gdk\" (UniqueName: \"kubernetes.io/projected/777dc0b6-3bda-495c-835b-069590aa2c0f-kube-api-access-k9gdk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-xjlcw\" (UID: \"777dc0b6-3bda-495c-835b-069590aa2c0f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xjlcw" Jan 27 07:02:19 crc kubenswrapper[4796]: E0127 07:02:19.713146 4796 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 07:02:19 crc kubenswrapper[4796]: E0127 07:02:19.713200 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/214120a9-fff2-405b-badc-8dbe507546bd-webhook-certs podName:214120a9-fff2-405b-badc-8dbe507546bd nodeName:}" failed. No retries permitted until 2026-01-27 07:02:20.213185739 +0000 UTC m=+961.320153066 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/214120a9-fff2-405b-badc-8dbe507546bd-webhook-certs") pod "openstack-operator-controller-manager-64d6b84b7b-4pfvn" (UID: "214120a9-fff2-405b-badc-8dbe507546bd") : secret "webhook-server-cert" not found Jan 27 07:02:19 crc kubenswrapper[4796]: E0127 07:02:19.713438 4796 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 07:02:19 crc kubenswrapper[4796]: E0127 07:02:19.713462 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/214120a9-fff2-405b-badc-8dbe507546bd-metrics-certs podName:214120a9-fff2-405b-badc-8dbe507546bd nodeName:}" failed. No retries permitted until 2026-01-27 07:02:20.213455285 +0000 UTC m=+961.320422612 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/214120a9-fff2-405b-badc-8dbe507546bd-metrics-certs") pod "openstack-operator-controller-manager-64d6b84b7b-4pfvn" (UID: "214120a9-fff2-405b-badc-8dbe507546bd") : secret "metrics-server-cert" not found Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.749173 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm2st\" (UniqueName: \"kubernetes.io/projected/214120a9-fff2-405b-badc-8dbe507546bd-kube-api-access-bm2st\") pod \"openstack-operator-controller-manager-64d6b84b7b-4pfvn\" (UID: \"214120a9-fff2-405b-badc-8dbe507546bd\") " pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-4pfvn" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.813767 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9gdk\" (UniqueName: \"kubernetes.io/projected/777dc0b6-3bda-495c-835b-069590aa2c0f-kube-api-access-k9gdk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-xjlcw\" (UID: \"777dc0b6-3bda-495c-835b-069590aa2c0f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xjlcw" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.834243 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9gdk\" (UniqueName: \"kubernetes.io/projected/777dc0b6-3bda-495c-835b-069590aa2c0f-kube-api-access-k9gdk\") pod \"rabbitmq-cluster-operator-manager-668c99d594-xjlcw\" (UID: \"777dc0b6-3bda-495c-835b-069590aa2c0f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xjlcw" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.850887 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8lwww" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.883832 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-7m7qw" Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.915137 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dd9ed078-9356-4d55-96d5-33039c9a0c68-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854v5n85\" (UID: \"dd9ed078-9356-4d55-96d5-33039c9a0c68\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854v5n85" Jan 27 07:02:19 crc kubenswrapper[4796]: E0127 07:02:19.915330 4796 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:02:19 crc kubenswrapper[4796]: E0127 07:02:19.915407 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dd9ed078-9356-4d55-96d5-33039c9a0c68-cert podName:dd9ed078-9356-4d55-96d5-33039c9a0c68 nodeName:}" failed. No retries permitted until 2026-01-27 07:02:20.91538835 +0000 UTC m=+962.022355677 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dd9ed078-9356-4d55-96d5-33039c9a0c68-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854v5n85" (UID: "dd9ed078-9356-4d55-96d5-33039c9a0c68") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:02:19 crc kubenswrapper[4796]: I0127 07:02:19.929909 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xjlcw" Jan 27 07:02:20 crc kubenswrapper[4796]: I0127 07:02:20.219208 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/214120a9-fff2-405b-badc-8dbe507546bd-webhook-certs\") pod \"openstack-operator-controller-manager-64d6b84b7b-4pfvn\" (UID: \"214120a9-fff2-405b-badc-8dbe507546bd\") " pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-4pfvn" Jan 27 07:02:20 crc kubenswrapper[4796]: I0127 07:02:20.219279 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/214120a9-fff2-405b-badc-8dbe507546bd-metrics-certs\") pod \"openstack-operator-controller-manager-64d6b84b7b-4pfvn\" (UID: \"214120a9-fff2-405b-badc-8dbe507546bd\") " pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-4pfvn" Jan 27 07:02:20 crc kubenswrapper[4796]: E0127 07:02:20.219479 4796 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 07:02:20 crc kubenswrapper[4796]: E0127 07:02:20.219546 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/214120a9-fff2-405b-badc-8dbe507546bd-metrics-certs podName:214120a9-fff2-405b-badc-8dbe507546bd nodeName:}" failed. No retries permitted until 2026-01-27 07:02:21.219510564 +0000 UTC m=+962.326477891 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/214120a9-fff2-405b-badc-8dbe507546bd-metrics-certs") pod "openstack-operator-controller-manager-64d6b84b7b-4pfvn" (UID: "214120a9-fff2-405b-badc-8dbe507546bd") : secret "metrics-server-cert" not found Jan 27 07:02:20 crc kubenswrapper[4796]: E0127 07:02:20.219586 4796 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 07:02:20 crc kubenswrapper[4796]: E0127 07:02:20.219607 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/214120a9-fff2-405b-badc-8dbe507546bd-webhook-certs podName:214120a9-fff2-405b-badc-8dbe507546bd nodeName:}" failed. No retries permitted until 2026-01-27 07:02:21.219598816 +0000 UTC m=+962.326566143 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/214120a9-fff2-405b-badc-8dbe507546bd-webhook-certs") pod "openstack-operator-controller-manager-64d6b84b7b-4pfvn" (UID: "214120a9-fff2-405b-badc-8dbe507546bd") : secret "webhook-server-cert" not found Jan 27 07:02:20 crc kubenswrapper[4796]: I0127 07:02:20.240717 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-v7pbv"] Jan 27 07:02:20 crc kubenswrapper[4796]: I0127 07:02:20.249396 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-4ksng"] Jan 27 07:02:20 crc kubenswrapper[4796]: I0127 07:02:20.255396 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-z2nvf"] Jan 27 07:02:20 crc kubenswrapper[4796]: I0127 07:02:20.261719 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-65ff799cfd-k9hg7"] Jan 27 07:02:20 crc kubenswrapper[4796]: W0127 07:02:20.264816 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeebb0fb3_128f_4de7_afb9_9d0d2963b51e.slice/crio-cae45a2e9a8bec43c8239bb996a96cf8118521b59e17ce8ba996ed28498134f4 WatchSource:0}: Error finding container cae45a2e9a8bec43c8239bb996a96cf8118521b59e17ce8ba996ed28498134f4: Status 404 returned error can't find the container with id cae45a2e9a8bec43c8239bb996a96cf8118521b59e17ce8ba996ed28498134f4 Jan 27 07:02:20 crc kubenswrapper[4796]: I0127 07:02:20.266633 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-74866cc64d-fsxqp"] Jan 27 07:02:20 crc kubenswrapper[4796]: W0127 07:02:20.266729 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod742577d6_bab7_4548_a7b2_84f562498a1c.slice/crio-54225be6c18b06a40140e6c8454a1ef8ad2f0e4e631132605f0d22757d7ae9f0 WatchSource:0}: Error finding container 54225be6c18b06a40140e6c8454a1ef8ad2f0e4e631132605f0d22757d7ae9f0: Status 404 returned error can't find the container with id 54225be6c18b06a40140e6c8454a1ef8ad2f0e4e631132605f0d22757d7ae9f0 Jan 27 07:02:20 crc kubenswrapper[4796]: W0127 07:02:20.267837 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa911582_096f_4b20_876c_c765de54b4fd.slice/crio-bd58ccb37da2e41a2e39173a978d69d24fd952ad3438164c8dee965c00e32fa4 WatchSource:0}: Error finding container bd58ccb37da2e41a2e39173a978d69d24fd952ad3438164c8dee965c00e32fa4: Status 404 returned error can't find the container with id bd58ccb37da2e41a2e39173a978d69d24fd952ad3438164c8dee965c00e32fa4 Jan 27 07:02:20 crc kubenswrapper[4796]: W0127 07:02:20.268949 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod649daee5_1c25_49bf_ade1_83c14d9603a3.slice/crio-facb637fb118ce9c2a798409348d13952b2bd1b579ec0c987c3bb1022175aec5 WatchSource:0}: Error finding container facb637fb118ce9c2a798409348d13952b2bd1b579ec0c987c3bb1022175aec5: Status 404 returned error can't find the container with id facb637fb118ce9c2a798409348d13952b2bd1b579ec0c987c3bb1022175aec5 Jan 27 07:02:20 crc kubenswrapper[4796]: I0127 07:02:20.534109 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/789b993b-5e32-4cd6-811f-8aecbe093298-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-7l6kl\" (UID: \"789b993b-5e32-4cd6-811f-8aecbe093298\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-7l6kl" Jan 27 07:02:20 crc kubenswrapper[4796]: E0127 07:02:20.534266 4796 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 07:02:20 crc kubenswrapper[4796]: E0127 07:02:20.534349 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/789b993b-5e32-4cd6-811f-8aecbe093298-cert podName:789b993b-5e32-4cd6-811f-8aecbe093298 nodeName:}" failed. No retries permitted until 2026-01-27 07:02:22.534330374 +0000 UTC m=+963.641297701 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/789b993b-5e32-4cd6-811f-8aecbe093298-cert") pod "infra-operator-controller-manager-7d75bc88d5-7l6kl" (UID: "789b993b-5e32-4cd6-811f-8aecbe093298") : secret "infra-operator-webhook-server-cert" not found Jan 27 07:02:20 crc kubenswrapper[4796]: I0127 07:02:20.618726 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7f54b7d6d4-gh6tm"] Jan 27 07:02:20 crc kubenswrapper[4796]: I0127 07:02:20.633868 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-4ksng" event={"ID":"aa911582-096f-4b20-876c-c765de54b4fd","Type":"ContainerStarted","Data":"bd58ccb37da2e41a2e39173a978d69d24fd952ad3438164c8dee965c00e32fa4"} Jan 27 07:02:20 crc kubenswrapper[4796]: I0127 07:02:20.636272 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-v7pbv" event={"ID":"9aaf11e1-9a42-4ab2-a112-81b2d07e56d8","Type":"ContainerStarted","Data":"46099ef16a5d9773240a8ad20bc9182a030cdf745702fc238590f69a72be40ed"} Jan 27 07:02:20 crc kubenswrapper[4796]: I0127 07:02:20.637156 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-k9hg7" event={"ID":"742577d6-bab7-4548-a7b2-84f562498a1c","Type":"ContainerStarted","Data":"54225be6c18b06a40140e6c8454a1ef8ad2f0e4e631132605f0d22757d7ae9f0"} Jan 27 07:02:20 crc kubenswrapper[4796]: I0127 07:02:20.637880 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-z2nvf" event={"ID":"eebb0fb3-128f-4de7-afb9-9d0d2963b51e","Type":"ContainerStarted","Data":"cae45a2e9a8bec43c8239bb996a96cf8118521b59e17ce8ba996ed28498134f4"} Jan 27 07:02:20 crc kubenswrapper[4796]: I0127 07:02:20.638422 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-74866cc64d-fsxqp" event={"ID":"649daee5-1c25-49bf-ade1-83c14d9603a3","Type":"ContainerStarted","Data":"facb637fb118ce9c2a798409348d13952b2bd1b579ec0c987c3bb1022175aec5"} Jan 27 07:02:20 crc kubenswrapper[4796]: I0127 07:02:20.643207 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-dt7z6"] Jan 27 07:02:20 crc kubenswrapper[4796]: I0127 07:02:20.669016 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-mcxtc"] Jan 27 07:02:20 crc 
kubenswrapper[4796]: W0127 07:02:20.674700 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf19b00f_4091_45da_b90b_15e1265c4239.slice/crio-38872d1b2f488c4ae7cfedc8f62760bc580b353348595ed121ec72e0264233b5 WatchSource:0}: Error finding container 38872d1b2f488c4ae7cfedc8f62760bc580b353348595ed121ec72e0264233b5: Status 404 returned error can't find the container with id 38872d1b2f488c4ae7cfedc8f62760bc580b353348595ed121ec72e0264233b5 Jan 27 07:02:20 crc kubenswrapper[4796]: I0127 07:02:20.676545 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-q44zr"] Jan 27 07:02:20 crc kubenswrapper[4796]: W0127 07:02:20.681556 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod013b66ef_a690_4075_9afc_9c9dd1822a3f.slice/crio-9f1a2255ef734eb8e797cf8cae8d0c01fc8dcb43dfe5b243a9ba49e6881ef0e0 WatchSource:0}: Error finding container 9f1a2255ef734eb8e797cf8cae8d0c01fc8dcb43dfe5b243a9ba49e6881ef0e0: Status 404 returned error can't find the container with id 9f1a2255ef734eb8e797cf8cae8d0c01fc8dcb43dfe5b243a9ba49e6881ef0e0 Jan 27 07:02:20 crc kubenswrapper[4796]: I0127 07:02:20.686526 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-cpcgw"] Jan 27 07:02:20 crc kubenswrapper[4796]: W0127 07:02:20.688807 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbaedb6b3_ef71_431f_aab8_2acd5b458e71.slice/crio-c644846eaa4e1232c2fcb9eaab19deed34ad71c81ae9f1bd9c75bf6c53ab84f7 WatchSource:0}: Error finding container c644846eaa4e1232c2fcb9eaab19deed34ad71c81ae9f1bd9c75bf6c53ab84f7: Status 404 returned error can't find the container with id c644846eaa4e1232c2fcb9eaab19deed34ad71c81ae9f1bd9c75bf6c53ab84f7 Jan 27 07:02:20 crc kubenswrapper[4796]: I0127 07:02:20.699075 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7875d7675-26p8v"] Jan 27 07:02:20 crc kubenswrapper[4796]: E0127 07:02:20.712308 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/manila-operator@sha256:82feceb236aaeae01761b172c94173d2624fe12feeb76a18c8aa2a664bafaf84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j4465,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-849fcfbb6b-cpcgw_openstack-operators(54314102-584a-4445-9477-7b089fabe859): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 07:02:20 crc kubenswrapper[4796]: E0127 07:02:20.712568 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/octavia-operator@sha256:bb8d23f38682e4b987b621a3116500a76d0dc380a1bfb9ea77f18dfacdee4f49,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hnj2q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7875d7675-26p8v_openstack-operators(baedb6b3-ef71-431f-aab8-2acd5b458e71): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 07:02:20 crc kubenswrapper[4796]: E0127 07:02:20.713747 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-26p8v" podUID="baedb6b3-ef71-431f-aab8-2acd5b458e71" Jan 27 07:02:20 crc kubenswrapper[4796]: E0127 07:02:20.713773 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-cpcgw" podUID="54314102-584a-4445-9477-7b089fabe859" Jan 27 07:02:20 crc kubenswrapper[4796]: I0127 07:02:20.723109 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-mrhx6"] Jan 27 07:02:20 crc kubenswrapper[4796]: E0127 07:02:20.725957 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4bpkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6b9fb5fdcb-v6mzv_openstack-operators(d114853c-9a01-4565-b547-aaccd3ab9d26): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 07:02:20 crc kubenswrapper[4796]: E0127 07:02:20.727476 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-v6mzv" podUID="d114853c-9a01-4565-b547-aaccd3ab9d26" Jan 27 07:02:20 crc kubenswrapper[4796]: E0127 07:02:20.732798 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qtg5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-77d5c5b54f-mqmpw_openstack-operators(1ea0aa0e-1344-489a-9f52-8a677dfaba38): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 07:02:20 crc kubenswrapper[4796]: E0127 07:02:20.732915 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/neutron-operator@sha256:14786c3a66c41213a03d6375c03209f22d439dd6e752317ddcbe21dda66bb569,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7f8hc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-7ffd8d76d4-m6r6w_openstack-operators(2882c56b-f839-4eb0-8b9c-9dce77a548ae): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 07:02:20 crc kubenswrapper[4796]: 
I0127 07:02:20.733061 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-mqmpw"] Jan 27 07:02:20 crc kubenswrapper[4796]: E0127 07:02:20.733889 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-mqmpw" podUID="1ea0aa0e-1344-489a-9f52-8a677dfaba38" Jan 27 07:02:20 crc kubenswrapper[4796]: E0127 07:02:20.733955 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-m6r6w" podUID="2882c56b-f839-4eb0-8b9c-9dce77a548ae" Jan 27 07:02:20 crc kubenswrapper[4796]: E0127 07:02:20.735036 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9jw4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-8lwww_openstack-operators(dde7d536-fd29-4887-82eb-41b2002bf874): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 07:02:20 crc kubenswrapper[4796]: E0127 07:02:20.736223 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8lwww" podUID="dde7d536-fd29-4887-82eb-41b2002bf874" Jan 27 07:02:20 crc kubenswrapper[4796]: W0127 07:02:20.763226 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod777dc0b6_3bda_495c_835b_069590aa2c0f.slice/crio-47dbaab072ae50f15bd12d6fd44e1e248eda99d154a4e40e144fdb8c8bee515b WatchSource:0}: Error finding container 47dbaab072ae50f15bd12d6fd44e1e248eda99d154a4e40e144fdb8c8bee515b: Status 404 returned error can't find the container with id 47dbaab072ae50f15bd12d6fd44e1e248eda99d154a4e40e144fdb8c8bee515b Jan 27 07:02:20 crc kubenswrapper[4796]: E0127 07:02:20.778576 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/watcher-operator@sha256:01f06b67539933628ebeb3c8fe813467eb6b478ba9fd4c6ff9892b8306e04f7a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-whdlk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-75db85654f-7m7qw_openstack-operators(e68c56f0-76cf-4607-8235-ecd1445ba2de): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 07:02:20 crc kubenswrapper[4796]: E0127 07:02:20.779934 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-7m7qw" 
podUID="e68c56f0-76cf-4607-8235-ecd1445ba2de" Jan 27 07:02:20 crc kubenswrapper[4796]: I0127 07:02:20.783188 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-j7dc7"] Jan 27 07:02:20 crc kubenswrapper[4796]: I0127 07:02:20.783225 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-rhfpn"] Jan 27 07:02:20 crc kubenswrapper[4796]: E0127 07:02:20.784168 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k9gdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-xjlcw_openstack-operators(777dc0b6-3bda-495c-835b-069590aa2c0f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 27 07:02:20 crc kubenswrapper[4796]: E0127 07:02:20.785779 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xjlcw" podUID="777dc0b6-3bda-495c-835b-069590aa2c0f" Jan 27 07:02:20 crc kubenswrapper[4796]: I0127 07:02:20.787688 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-m6r6w"] Jan 27 07:02:20 crc kubenswrapper[4796]: I0127 07:02:20.792225 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-v6mzv"] Jan 27 07:02:20 crc kubenswrapper[4796]: I0127 07:02:20.796462 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-8lwww"] Jan 27 
07:02:20 crc kubenswrapper[4796]: I0127 07:02:20.800653 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75db85654f-7m7qw"] Jan 27 07:02:20 crc kubenswrapper[4796]: I0127 07:02:20.803836 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xjlcw"] Jan 27 07:02:20 crc kubenswrapper[4796]: I0127 07:02:20.940916 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dd9ed078-9356-4d55-96d5-33039c9a0c68-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854v5n85\" (UID: \"dd9ed078-9356-4d55-96d5-33039c9a0c68\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854v5n85" Jan 27 07:02:20 crc kubenswrapper[4796]: E0127 07:02:20.941366 4796 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:02:20 crc kubenswrapper[4796]: E0127 07:02:20.941413 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dd9ed078-9356-4d55-96d5-33039c9a0c68-cert podName:dd9ed078-9356-4d55-96d5-33039c9a0c68 nodeName:}" failed. No retries permitted until 2026-01-27 07:02:22.941398837 +0000 UTC m=+964.048366164 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dd9ed078-9356-4d55-96d5-33039c9a0c68-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854v5n85" (UID: "dd9ed078-9356-4d55-96d5-33039c9a0c68") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:02:21 crc kubenswrapper[4796]: I0127 07:02:21.245974 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/214120a9-fff2-405b-badc-8dbe507546bd-webhook-certs\") pod \"openstack-operator-controller-manager-64d6b84b7b-4pfvn\" (UID: \"214120a9-fff2-405b-badc-8dbe507546bd\") " pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-4pfvn" Jan 27 07:02:21 crc kubenswrapper[4796]: E0127 07:02:21.246162 4796 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 07:02:21 crc kubenswrapper[4796]: E0127 07:02:21.246242 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/214120a9-fff2-405b-badc-8dbe507546bd-webhook-certs podName:214120a9-fff2-405b-badc-8dbe507546bd nodeName:}" failed. No retries permitted until 2026-01-27 07:02:23.246220956 +0000 UTC m=+964.353188283 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/214120a9-fff2-405b-badc-8dbe507546bd-webhook-certs") pod "openstack-operator-controller-manager-64d6b84b7b-4pfvn" (UID: "214120a9-fff2-405b-badc-8dbe507546bd") : secret "webhook-server-cert" not found Jan 27 07:02:21 crc kubenswrapper[4796]: I0127 07:02:21.246369 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/214120a9-fff2-405b-badc-8dbe507546bd-metrics-certs\") pod \"openstack-operator-controller-manager-64d6b84b7b-4pfvn\" (UID: \"214120a9-fff2-405b-badc-8dbe507546bd\") " pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-4pfvn" Jan 27 07:02:21 crc kubenswrapper[4796]: E0127 07:02:21.246633 4796 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 07:02:21 crc kubenswrapper[4796]: E0127 07:02:21.246726 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/214120a9-fff2-405b-badc-8dbe507546bd-metrics-certs podName:214120a9-fff2-405b-badc-8dbe507546bd nodeName:}" failed. No retries permitted until 2026-01-27 07:02:23.246706978 +0000 UTC m=+964.353674315 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/214120a9-fff2-405b-badc-8dbe507546bd-metrics-certs") pod "openstack-operator-controller-manager-64d6b84b7b-4pfvn" (UID: "214120a9-fff2-405b-badc-8dbe507546bd") : secret "metrics-server-cert" not found Jan 27 07:02:21 crc kubenswrapper[4796]: I0127 07:02:21.666469 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-v6mzv" event={"ID":"d114853c-9a01-4565-b547-aaccd3ab9d26","Type":"ContainerStarted","Data":"bd48ca578f922f9fc4355b77af1052d041ee743999741290d020aa36fef42918"} Jan 27 07:02:21 crc kubenswrapper[4796]: I0127 07:02:21.669030 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mrhx6" event={"ID":"bd76656d-eae6-4878-ae92-c27deac760ac","Type":"ContainerStarted","Data":"fedb13d3765e5dbb37255098b0e39800530d09f98b7dfc6836a1c0fa05da2d39"} Jan 27 07:02:21 crc kubenswrapper[4796]: E0127 07:02:21.671559 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-v6mzv" podUID="d114853c-9a01-4565-b547-aaccd3ab9d26" Jan 27 07:02:21 crc kubenswrapper[4796]: I0127 07:02:21.674071 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-gh6tm" event={"ID":"08457a1a-2d34-4d6f-8df1-c40f80daf96b","Type":"ContainerStarted","Data":"9c2e741b524fca06af96f961303558ab87d86e2f4538eeb565f32c76bc6cacc6"} Jan 27 07:02:21 crc kubenswrapper[4796]: I0127 07:02:21.686568 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xjlcw" event={"ID":"777dc0b6-3bda-495c-835b-069590aa2c0f","Type":"ContainerStarted","Data":"47dbaab072ae50f15bd12d6fd44e1e248eda99d154a4e40e144fdb8c8bee515b"} Jan 27 07:02:21 crc kubenswrapper[4796]: I0127 07:02:21.687752 4796 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-26p8v" event={"ID":"baedb6b3-ef71-431f-aab8-2acd5b458e71","Type":"ContainerStarted","Data":"c644846eaa4e1232c2fcb9eaab19deed34ad71c81ae9f1bd9c75bf6c53ab84f7"} Jan 27 07:02:21 crc kubenswrapper[4796]: E0127 07:02:21.688028 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xjlcw" podUID="777dc0b6-3bda-495c-835b-069590aa2c0f" Jan 27 07:02:21 crc kubenswrapper[4796]: E0127 07:02:21.688945 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/octavia-operator@sha256:bb8d23f38682e4b987b621a3116500a76d0dc380a1bfb9ea77f18dfacdee4f49\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-26p8v" podUID="baedb6b3-ef71-431f-aab8-2acd5b458e71" Jan 27 07:02:21 crc kubenswrapper[4796]: I0127 07:02:21.690653 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mcxtc" event={"ID":"013b66ef-a690-4075-9afc-9c9dd1822a3f","Type":"ContainerStarted","Data":"9f1a2255ef734eb8e797cf8cae8d0c01fc8dcb43dfe5b243a9ba49e6881ef0e0"} Jan 27 07:02:21 crc kubenswrapper[4796]: I0127 07:02:21.692935 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-q44zr" event={"ID":"af19b00f-4091-45da-b90b-15e1265c4239","Type":"ContainerStarted","Data":"38872d1b2f488c4ae7cfedc8f62760bc580b353348595ed121ec72e0264233b5"} Jan 27 07:02:21 crc kubenswrapper[4796]: I0127 07:02:21.695410 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-rhfpn" event={"ID":"614461e3-64f7-4aa0-96a7-f8b31fefdbf1","Type":"ContainerStarted","Data":"746fed9eb9fec5495470aa75be00a835b8588b2eedc02cf781201604c4062c04"} Jan 27 07:02:21 crc kubenswrapper[4796]: I0127 07:02:21.698205 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-j7dc7" event={"ID":"d0fc3b17-8640-4ad4-8794-890980c2cd92","Type":"ContainerStarted","Data":"38901afb6108f37dc2239ddaf7edc20164e14e06c704ee72d266bb2fafefb805"} Jan 27 07:02:21 crc kubenswrapper[4796]: I0127 07:02:21.707261 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8lwww" event={"ID":"dde7d536-fd29-4887-82eb-41b2002bf874","Type":"ContainerStarted","Data":"483292331e43bde55b687979902aed8fe1268f0449753503fade4f4020f9b7e9"} Jan 27 07:02:21 crc kubenswrapper[4796]: I0127 07:02:21.711756 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-mqmpw" event={"ID":"1ea0aa0e-1344-489a-9f52-8a677dfaba38","Type":"ContainerStarted","Data":"f0ddd35fb5f1bfb582bd703700dedef5206ad9283611304312ed3eb48a118b22"} Jan 27 07:02:21 crc kubenswrapper[4796]: E0127 07:02:21.712277 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8lwww" podUID="dde7d536-fd29-4887-82eb-41b2002bf874" Jan 27 07:02:21 crc kubenswrapper[4796]: E0127 07:02:21.714887 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-mqmpw" podUID="1ea0aa0e-1344-489a-9f52-8a677dfaba38" Jan 27 07:02:21 crc kubenswrapper[4796]: I0127 07:02:21.717275 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-cpcgw" event={"ID":"54314102-584a-4445-9477-7b089fabe859","Type":"ContainerStarted","Data":"57a3e88d0d82311e2589207394448d5c082d57c1ae8dc47f1f18a853c1140624"} Jan 27 07:02:21 crc kubenswrapper[4796]: E0127 07:02:21.720080 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/manila-operator@sha256:82feceb236aaeae01761b172c94173d2624fe12feeb76a18c8aa2a664bafaf84\\\"\"" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-cpcgw" podUID="54314102-584a-4445-9477-7b089fabe859" Jan 27 07:02:21 crc kubenswrapper[4796]: I0127 07:02:21.720800 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-m6r6w" event={"ID":"2882c56b-f839-4eb0-8b9c-9dce77a548ae","Type":"ContainerStarted","Data":"2080e74f87e423dcc0b770c2b828d31b833e36ab396fb4d1397c39710b7f3f4a"} Jan 27 07:02:21 crc kubenswrapper[4796]: E0127 07:02:21.722849 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/neutron-operator@sha256:14786c3a66c41213a03d6375c03209f22d439dd6e752317ddcbe21dda66bb569\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-m6r6w" podUID="2882c56b-f839-4eb0-8b9c-9dce77a548ae" Jan 27 07:02:21 crc kubenswrapper[4796]: I0127 07:02:21.726831 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-7m7qw" event={"ID":"e68c56f0-76cf-4607-8235-ecd1445ba2de","Type":"ContainerStarted","Data":"c5e03c5e779e367f333e76213a53bac739fb534af5d2105ef67e888a8e6f665d"} Jan 27 07:02:21 crc kubenswrapper[4796]: I0127 07:02:21.739167 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-dt7z6" event={"ID":"7e4727d2-ec3b-4b51-bdc4-d4bf1f91c0ed","Type":"ContainerStarted","Data":"f90b6aaf7340bce29e58bfd62b19060bad5a9f0c53b948d7fb1181a38bbcc8d7"} Jan 27 07:02:21 crc kubenswrapper[4796]: E0127 07:02:21.749602 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/watcher-operator@sha256:01f06b67539933628ebeb3c8fe813467eb6b478ba9fd4c6ff9892b8306e04f7a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-7m7qw" podUID="e68c56f0-76cf-4607-8235-ecd1445ba2de" Jan 27 07:02:22 crc kubenswrapper[4796]: I0127 
07:02:22.570963 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/789b993b-5e32-4cd6-811f-8aecbe093298-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-7l6kl\" (UID: \"789b993b-5e32-4cd6-811f-8aecbe093298\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-7l6kl" Jan 27 07:02:22 crc kubenswrapper[4796]: E0127 07:02:22.571138 4796 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 07:02:22 crc kubenswrapper[4796]: E0127 07:02:22.571229 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/789b993b-5e32-4cd6-811f-8aecbe093298-cert podName:789b993b-5e32-4cd6-811f-8aecbe093298 nodeName:}" failed. No retries permitted until 2026-01-27 07:02:26.57120315 +0000 UTC m=+967.678170467 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/789b993b-5e32-4cd6-811f-8aecbe093298-cert") pod "infra-operator-controller-manager-7d75bc88d5-7l6kl" (UID: "789b993b-5e32-4cd6-811f-8aecbe093298") : secret "infra-operator-webhook-server-cert" not found Jan 27 07:02:22 crc kubenswrapper[4796]: E0127 07:02:22.747449 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xjlcw" podUID="777dc0b6-3bda-495c-835b-069590aa2c0f" Jan 27 07:02:22 crc kubenswrapper[4796]: E0127 07:02:22.747500 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/manila-operator@sha256:82feceb236aaeae01761b172c94173d2624fe12feeb76a18c8aa2a664bafaf84\\\"\"" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-cpcgw" podUID="54314102-584a-4445-9477-7b089fabe859" Jan 27 07:02:22 crc kubenswrapper[4796]: E0127 07:02:22.747958 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/neutron-operator@sha256:14786c3a66c41213a03d6375c03209f22d439dd6e752317ddcbe21dda66bb569\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-m6r6w" podUID="2882c56b-f839-4eb0-8b9c-9dce77a548ae" Jan 27 07:02:22 crc kubenswrapper[4796]: E0127 07:02:22.747964 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/watcher-operator@sha256:01f06b67539933628ebeb3c8fe813467eb6b478ba9fd4c6ff9892b8306e04f7a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-7m7qw" podUID="e68c56f0-76cf-4607-8235-ecd1445ba2de" Jan 27 07:02:22 crc kubenswrapper[4796]: E0127 07:02:22.748282 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8lwww" 
podUID="dde7d536-fd29-4887-82eb-41b2002bf874" Jan 27 07:02:22 crc kubenswrapper[4796]: E0127 07:02:22.748385 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-v6mzv" podUID="d114853c-9a01-4565-b547-aaccd3ab9d26" Jan 27 07:02:22 crc kubenswrapper[4796]: E0127 07:02:22.748434 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/octavia-operator@sha256:bb8d23f38682e4b987b621a3116500a76d0dc380a1bfb9ea77f18dfacdee4f49\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-26p8v" podUID="baedb6b3-ef71-431f-aab8-2acd5b458e71" Jan 27 07:02:22 crc kubenswrapper[4796]: E0127 07:02:22.748592 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-mqmpw" podUID="1ea0aa0e-1344-489a-9f52-8a677dfaba38" Jan 27 07:02:22 crc kubenswrapper[4796]: I0127 07:02:22.977253 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dd9ed078-9356-4d55-96d5-33039c9a0c68-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854v5n85\" (UID: \"dd9ed078-9356-4d55-96d5-33039c9a0c68\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854v5n85" Jan 27 07:02:22 crc kubenswrapper[4796]: E0127 07:02:22.977509 4796 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:02:22 crc kubenswrapper[4796]: E0127 07:02:22.977792 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dd9ed078-9356-4d55-96d5-33039c9a0c68-cert podName:dd9ed078-9356-4d55-96d5-33039c9a0c68 nodeName:}" failed. No retries permitted until 2026-01-27 07:02:26.97776826 +0000 UTC m=+968.084735587 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dd9ed078-9356-4d55-96d5-33039c9a0c68-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854v5n85" (UID: "dd9ed078-9356-4d55-96d5-33039c9a0c68") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:02:23 crc kubenswrapper[4796]: I0127 07:02:23.283196 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/214120a9-fff2-405b-badc-8dbe507546bd-webhook-certs\") pod \"openstack-operator-controller-manager-64d6b84b7b-4pfvn\" (UID: \"214120a9-fff2-405b-badc-8dbe507546bd\") " pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-4pfvn" Jan 27 07:02:23 crc kubenswrapper[4796]: I0127 07:02:23.283247 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/214120a9-fff2-405b-badc-8dbe507546bd-metrics-certs\") pod \"openstack-operator-controller-manager-64d6b84b7b-4pfvn\" (UID: \"214120a9-fff2-405b-badc-8dbe507546bd\") " pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-4pfvn" Jan 27 07:02:23 crc kubenswrapper[4796]: E0127 07:02:23.283426 4796 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 07:02:23 crc kubenswrapper[4796]: E0127 07:02:23.283471 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/214120a9-fff2-405b-badc-8dbe507546bd-metrics-certs podName:214120a9-fff2-405b-badc-8dbe507546bd nodeName:}" failed. No retries permitted until 2026-01-27 07:02:27.2834584 +0000 UTC m=+968.390425727 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/214120a9-fff2-405b-badc-8dbe507546bd-metrics-certs") pod "openstack-operator-controller-manager-64d6b84b7b-4pfvn" (UID: "214120a9-fff2-405b-badc-8dbe507546bd") : secret "metrics-server-cert" not found Jan 27 07:02:23 crc kubenswrapper[4796]: E0127 07:02:23.283781 4796 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 07:02:23 crc kubenswrapper[4796]: E0127 07:02:23.283813 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/214120a9-fff2-405b-badc-8dbe507546bd-webhook-certs podName:214120a9-fff2-405b-badc-8dbe507546bd nodeName:}" failed. No retries permitted until 2026-01-27 07:02:27.283804617 +0000 UTC m=+968.390771944 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/214120a9-fff2-405b-badc-8dbe507546bd-webhook-certs") pod "openstack-operator-controller-manager-64d6b84b7b-4pfvn" (UID: "214120a9-fff2-405b-badc-8dbe507546bd") : secret "webhook-server-cert" not found Jan 27 07:02:26 crc kubenswrapper[4796]: I0127 07:02:26.642379 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/789b993b-5e32-4cd6-811f-8aecbe093298-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-7l6kl\" (UID: \"789b993b-5e32-4cd6-811f-8aecbe093298\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-7l6kl" Jan 27 07:02:26 crc kubenswrapper[4796]: E0127 07:02:26.642528 4796 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 27 07:02:26 crc kubenswrapper[4796]: E0127 07:02:26.642825 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/789b993b-5e32-4cd6-811f-8aecbe093298-cert podName:789b993b-5e32-4cd6-811f-8aecbe093298 nodeName:}" failed. No retries permitted until 2026-01-27 07:02:34.642809671 +0000 UTC m=+975.749776998 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/789b993b-5e32-4cd6-811f-8aecbe093298-cert") pod "infra-operator-controller-manager-7d75bc88d5-7l6kl" (UID: "789b993b-5e32-4cd6-811f-8aecbe093298") : secret "infra-operator-webhook-server-cert" not found Jan 27 07:02:27 crc kubenswrapper[4796]: I0127 07:02:27.048895 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dd9ed078-9356-4d55-96d5-33039c9a0c68-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854v5n85\" (UID: \"dd9ed078-9356-4d55-96d5-33039c9a0c68\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854v5n85" Jan 27 07:02:27 crc kubenswrapper[4796]: E0127 07:02:27.049088 4796 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:02:27 crc kubenswrapper[4796]: E0127 07:02:27.049151 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dd9ed078-9356-4d55-96d5-33039c9a0c68-cert podName:dd9ed078-9356-4d55-96d5-33039c9a0c68 nodeName:}" failed. No retries permitted until 2026-01-27 07:02:35.049133775 +0000 UTC m=+976.156101102 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dd9ed078-9356-4d55-96d5-33039c9a0c68-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854v5n85" (UID: "dd9ed078-9356-4d55-96d5-33039c9a0c68") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 27 07:02:27 crc kubenswrapper[4796]: I0127 07:02:27.352109 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/214120a9-fff2-405b-badc-8dbe507546bd-webhook-certs\") pod \"openstack-operator-controller-manager-64d6b84b7b-4pfvn\" (UID: \"214120a9-fff2-405b-badc-8dbe507546bd\") " pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-4pfvn" Jan 27 07:02:27 crc kubenswrapper[4796]: I0127 07:02:27.352160 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/214120a9-fff2-405b-badc-8dbe507546bd-metrics-certs\") pod \"openstack-operator-controller-manager-64d6b84b7b-4pfvn\" (UID: \"214120a9-fff2-405b-badc-8dbe507546bd\") " pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-4pfvn" Jan 27 07:02:27 crc kubenswrapper[4796]: E0127 07:02:27.352341 4796 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 27 07:02:27 crc kubenswrapper[4796]: E0127 07:02:27.352490 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/214120a9-fff2-405b-badc-8dbe507546bd-webhook-certs podName:214120a9-fff2-405b-badc-8dbe507546bd nodeName:}" failed. No retries permitted until 2026-01-27 07:02:35.352466081 +0000 UTC m=+976.459433418 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/214120a9-fff2-405b-badc-8dbe507546bd-webhook-certs") pod "openstack-operator-controller-manager-64d6b84b7b-4pfvn" (UID: "214120a9-fff2-405b-badc-8dbe507546bd") : secret "webhook-server-cert" not found Jan 27 07:02:27 crc kubenswrapper[4796]: E0127 07:02:27.352371 4796 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 27 07:02:27 crc kubenswrapper[4796]: E0127 07:02:27.352607 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/214120a9-fff2-405b-badc-8dbe507546bd-metrics-certs podName:214120a9-fff2-405b-badc-8dbe507546bd nodeName:}" failed. No retries permitted until 2026-01-27 07:02:35.352585964 +0000 UTC m=+976.459553301 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/214120a9-fff2-405b-badc-8dbe507546bd-metrics-certs") pod "openstack-operator-controller-manager-64d6b84b7b-4pfvn" (UID: "214120a9-fff2-405b-badc-8dbe507546bd") : secret "metrics-server-cert" not found Jan 27 07:02:34 crc kubenswrapper[4796]: E0127 07:02:34.316249 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/nova-operator@sha256:dbde47574a2204e5cb6af468e5c74df5124b1daab0ebcb0dc8c489fa40c8942f" Jan 27 07:02:34 crc kubenswrapper[4796]: E0127 07:02:34.316845 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/nova-operator@sha256:dbde47574a2204e5cb6af468e5c74df5124b1daab0ebcb0dc8c489fa40c8942f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v8gt9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-7f54b7d6d4-gh6tm_openstack-operators(08457a1a-2d34-4d6f-8df1-c40f80daf96b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 07:02:34 crc kubenswrapper[4796]: E0127 07:02:34.317958 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-gh6tm" podUID="08457a1a-2d34-4d6f-8df1-c40f80daf96b" 
Jan 27 07:02:34 crc kubenswrapper[4796]: I0127 07:02:34.673868 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/789b993b-5e32-4cd6-811f-8aecbe093298-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-7l6kl\" (UID: \"789b993b-5e32-4cd6-811f-8aecbe093298\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-7l6kl" Jan 27 07:02:34 crc kubenswrapper[4796]: I0127 07:02:34.683947 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/789b993b-5e32-4cd6-811f-8aecbe093298-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-7l6kl\" (UID: \"789b993b-5e32-4cd6-811f-8aecbe093298\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-7l6kl" Jan 27 07:02:34 crc kubenswrapper[4796]: I0127 07:02:34.757099 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-kscj6" Jan 27 07:02:34 crc kubenswrapper[4796]: I0127 07:02:34.764138 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-7l6kl" Jan 27 07:02:34 crc kubenswrapper[4796]: E0127 07:02:34.841099 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/nova-operator@sha256:dbde47574a2204e5cb6af468e5c74df5124b1daab0ebcb0dc8c489fa40c8942f\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-gh6tm" podUID="08457a1a-2d34-4d6f-8df1-c40f80daf96b" Jan 27 07:02:35 crc kubenswrapper[4796]: I0127 07:02:35.079268 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dd9ed078-9356-4d55-96d5-33039c9a0c68-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854v5n85\" (UID: \"dd9ed078-9356-4d55-96d5-33039c9a0c68\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854v5n85" Jan 27 07:02:35 crc kubenswrapper[4796]: I0127 07:02:35.083600 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dd9ed078-9356-4d55-96d5-33039c9a0c68-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854v5n85\" (UID: \"dd9ed078-9356-4d55-96d5-33039c9a0c68\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854v5n85" Jan 27 07:02:35 crc kubenswrapper[4796]: I0127 07:02:35.181127 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-xzt69" Jan 27 07:02:35 crc kubenswrapper[4796]: I0127 07:02:35.189868 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854v5n85" Jan 27 07:02:35 crc kubenswrapper[4796]: I0127 07:02:35.382648 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/214120a9-fff2-405b-badc-8dbe507546bd-webhook-certs\") pod \"openstack-operator-controller-manager-64d6b84b7b-4pfvn\" (UID: \"214120a9-fff2-405b-badc-8dbe507546bd\") " pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-4pfvn" Jan 27 07:02:35 crc kubenswrapper[4796]: I0127 07:02:35.382694 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/214120a9-fff2-405b-badc-8dbe507546bd-metrics-certs\") pod \"openstack-operator-controller-manager-64d6b84b7b-4pfvn\" (UID: \"214120a9-fff2-405b-badc-8dbe507546bd\") " pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-4pfvn" Jan 27 07:02:35 crc kubenswrapper[4796]: I0127 07:02:35.388081 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/214120a9-fff2-405b-badc-8dbe507546bd-metrics-certs\") pod \"openstack-operator-controller-manager-64d6b84b7b-4pfvn\" (UID: \"214120a9-fff2-405b-badc-8dbe507546bd\") " pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-4pfvn" Jan 27 07:02:35 crc kubenswrapper[4796]: I0127 07:02:35.388715 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/214120a9-fff2-405b-badc-8dbe507546bd-webhook-certs\") pod \"openstack-operator-controller-manager-64d6b84b7b-4pfvn\" (UID: \"214120a9-fff2-405b-badc-8dbe507546bd\") " pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-4pfvn" Jan 27 07:02:35 crc kubenswrapper[4796]: E0127 07:02:35.437341 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/barbican-operator@sha256:44022a4042de334e1f04985eb102df0076ddbe3065e85b243a02a7c509952977" Jan 27 07:02:35 crc kubenswrapper[4796]: E0127 07:02:35.437521 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/barbican-operator@sha256:44022a4042de334e1f04985eb102df0076ddbe3065e85b243a02a7c509952977,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9q8xs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-65ff799cfd-k9hg7_openstack-operators(742577d6-bab7-4548-a7b2-84f562498a1c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 07:02:35 crc kubenswrapper[4796]: E0127 07:02:35.438675 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-k9hg7" podUID="742577d6-bab7-4548-a7b2-84f562498a1c" Jan 27 07:02:35 crc kubenswrapper[4796]: I0127 07:02:35.528255 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-rdp74" Jan 27 07:02:35 crc kubenswrapper[4796]: I0127 07:02:35.536824 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-4pfvn" Jan 27 07:02:35 crc kubenswrapper[4796]: E0127 07:02:35.846299 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/barbican-operator@sha256:44022a4042de334e1f04985eb102df0076ddbe3065e85b243a02a7c509952977\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-k9hg7" podUID="742577d6-bab7-4548-a7b2-84f562498a1c" Jan 27 07:02:35 crc kubenswrapper[4796]: E0127 07:02:35.990495 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/keystone-operator@sha256:008a2e338430e7dd513f81f66320cc5c1332c332a3191b537d75786489d7f487" Jan 27 07:02:35 crc kubenswrapper[4796]: E0127 07:02:35.990762 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/keystone-operator@sha256:008a2e338430e7dd513f81f66320cc5c1332c332a3191b537d75786489d7f487,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s8dwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-55f684fd56-rhfpn_openstack-operators(614461e3-64f7-4aa0-96a7-f8b31fefdbf1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 27 07:02:35 crc kubenswrapper[4796]: E0127 07:02:35.992622 4796 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-rhfpn" podUID="614461e3-64f7-4aa0-96a7-f8b31fefdbf1" Jan 27 07:02:36 crc kubenswrapper[4796]: I0127 07:02:36.321961 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-7l6kl"] Jan 27 07:02:36 crc kubenswrapper[4796]: I0127 07:02:36.459401 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854v5n85"] Jan 27 07:02:36 crc kubenswrapper[4796]: W0127 07:02:36.483100 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd9ed078_9356_4d55_96d5_33039c9a0c68.slice/crio-c8a836acb79606b48549918b4a4a4c56fedc952b6a661fcc1261316456023cb4 WatchSource:0}: Error finding container c8a836acb79606b48549918b4a4a4c56fedc952b6a661fcc1261316456023cb4: Status 404 returned error can't find the container with id c8a836acb79606b48549918b4a4a4c56fedc952b6a661fcc1261316456023cb4 Jan 27 07:02:36 crc kubenswrapper[4796]: I0127 07:02:36.485465 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-64d6b84b7b-4pfvn"] Jan 27 07:02:36 crc kubenswrapper[4796]: W0127 07:02:36.496010 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod214120a9_fff2_405b_badc_8dbe507546bd.slice/crio-a4f4c8c84bd10bcd3dd07faf08dd179b4cec17beebe1b456a4f3d68f27a25a9b WatchSource:0}: Error finding container a4f4c8c84bd10bcd3dd07faf08dd179b4cec17beebe1b456a4f3d68f27a25a9b: Status 404 returned error can't find the container with id a4f4c8c84bd10bcd3dd07faf08dd179b4cec17beebe1b456a4f3d68f27a25a9b Jan 27 07:02:36 crc kubenswrapper[4796]: I0127 07:02:36.852220 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854v5n85" event={"ID":"dd9ed078-9356-4d55-96d5-33039c9a0c68","Type":"ContainerStarted","Data":"c8a836acb79606b48549918b4a4a4c56fedc952b6a661fcc1261316456023cb4"} Jan 27 07:02:36 crc kubenswrapper[4796]: I0127 07:02:36.853386 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-7l6kl" event={"ID":"789b993b-5e32-4cd6-811f-8aecbe093298","Type":"ContainerStarted","Data":"0227eaac2468dfcee65060321a1964d55a8ff4c0b32da1f5883c2a2e77df725e"} Jan 27 07:02:36 crc kubenswrapper[4796]: I0127 07:02:36.854273 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-4pfvn" event={"ID":"214120a9-fff2-405b-badc-8dbe507546bd","Type":"ContainerStarted","Data":"a4f4c8c84bd10bcd3dd07faf08dd179b4cec17beebe1b456a4f3d68f27a25a9b"} Jan 27 07:02:36 crc kubenswrapper[4796]: E0127 07:02:36.855632 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/keystone-operator@sha256:008a2e338430e7dd513f81f66320cc5c1332c332a3191b537d75786489d7f487\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-rhfpn" podUID="614461e3-64f7-4aa0-96a7-f8b31fefdbf1" Jan 27 07:02:40 crc kubenswrapper[4796]: I0127 07:02:40.881455 
4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mrhx6" event={"ID":"bd76656d-eae6-4878-ae92-c27deac760ac","Type":"ContainerStarted","Data":"dbc84cf4cad3040f593f5c5c8a495d2a2f033daecaca9ab1fa9e69ecf505b5f9"} Jan 27 07:02:40 crc kubenswrapper[4796]: I0127 07:02:40.882941 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-4pfvn" event={"ID":"214120a9-fff2-405b-badc-8dbe507546bd","Type":"ContainerStarted","Data":"3e9e546d1d3c0970390d6c9a8accd7c36288bc34cc7fda4ccd784933472d9bc7"} Jan 27 07:02:40 crc kubenswrapper[4796]: I0127 07:02:40.884154 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mcxtc" event={"ID":"013b66ef-a690-4075-9afc-9c9dd1822a3f","Type":"ContainerStarted","Data":"93a31b07df7ecdb302a60971ddecebd92b8e494f01fb5f75600cc37b842c9e9b"} Jan 27 07:02:41 crc kubenswrapper[4796]: I0127 07:02:41.891575 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-v7pbv" event={"ID":"9aaf11e1-9a42-4ab2-a112-81b2d07e56d8","Type":"ContainerStarted","Data":"0e6c48c4bb35cc5762c4a394440ad2095602d018e574b352923c1895d4c0b9ed"} Jan 27 07:02:41 crc kubenswrapper[4796]: I0127 07:02:41.893628 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-q44zr" event={"ID":"af19b00f-4091-45da-b90b-15e1265c4239","Type":"ContainerStarted","Data":"b10b68208ba0aebe8826cd6f110ff7c142e65a1760cc9516ef04da155a755bd0"} Jan 27 07:02:41 crc kubenswrapper[4796]: I0127 07:02:41.894794 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-j7dc7" event={"ID":"d0fc3b17-8640-4ad4-8794-890980c2cd92","Type":"ContainerStarted","Data":"b9b89a6c0f1450ac2148da302be5fd110f0c80947345444c4a026242ed567733"} Jan 27 07:02:41 crc kubenswrapper[4796]: I0127 07:02:41.896077 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-4ksng" event={"ID":"aa911582-096f-4b20-876c-c765de54b4fd","Type":"ContainerStarted","Data":"03d77fb82bf1e54f3278b15a35eefdf1d8002039b942359a8a54c98043f2178a"} Jan 27 07:02:41 crc kubenswrapper[4796]: I0127 07:02:41.897263 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-z2nvf" event={"ID":"eebb0fb3-128f-4de7-afb9-9d0d2963b51e","Type":"ContainerStarted","Data":"539f3692a580e0b57830f6ea1b8b4bab51432f80559d4f5ca49a2cf752fcb72d"} Jan 27 07:02:41 crc kubenswrapper[4796]: I0127 07:02:41.898589 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-74866cc64d-fsxqp" event={"ID":"649daee5-1c25-49bf-ade1-83c14d9603a3","Type":"ContainerStarted","Data":"1c81d3b9e926424a6783100797d3c4dc004da3d9a5fb7c52fa50a10b8de65308"} Jan 27 07:02:41 crc kubenswrapper[4796]: I0127 07:02:41.899863 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-dt7z6" event={"ID":"7e4727d2-ec3b-4b51-bdc4-d4bf1f91c0ed","Type":"ContainerStarted","Data":"860ba3c9a4347aaf3de366875146a8cc6ae204b876c9a64caa3f6ae807fd0360"} Jan 27 07:02:41 crc kubenswrapper[4796]: I0127 07:02:41.900008 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mrhx6" Jan 27 07:02:41 crc kubenswrapper[4796]: I0127 07:02:41.918660 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mrhx6" podStartSLOduration=7.625901152 podStartE2EDuration="22.918641432s" podCreationTimestamp="2026-01-27 07:02:19 +0000 UTC" firstStartedPulling="2026-01-27 07:02:20.675563353 +0000 UTC m=+961.782530690" lastFinishedPulling="2026-01-27 07:02:35.968303623 +0000 UTC m=+977.075270970" observedRunningTime="2026-01-27 07:02:41.914657381 +0000 UTC m=+983.021624708" watchObservedRunningTime="2026-01-27 07:02:41.918641432 +0000 UTC m=+983.025608759" Jan 27 07:02:41 crc kubenswrapper[4796]: I0127 07:02:41.952891 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-4pfvn" podStartSLOduration=22.952870829 podStartE2EDuration="22.952870829s" podCreationTimestamp="2026-01-27 07:02:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-27 07:02:41.945561931 +0000 UTC m=+983.052529258" watchObservedRunningTime="2026-01-27 07:02:41.952870829 +0000 UTC m=+983.059838156" Jan 27 07:02:42 crc kubenswrapper[4796]: I0127 07:02:42.908061 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mcxtc" Jan 27 07:02:42 crc kubenswrapper[4796]: I0127 07:02:42.908358 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-j7dc7" Jan 27 07:02:42 crc kubenswrapper[4796]: I0127 07:02:42.930104 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mcxtc" podStartSLOduration=8.638371306 podStartE2EDuration="23.930086124s" podCreationTimestamp="2026-01-27 07:02:19 +0000 UTC" firstStartedPulling="2026-01-27 07:02:20.689096353 +0000 UTC m=+961.796063680" lastFinishedPulling="2026-01-27 07:02:35.980811141 +0000 UTC m=+977.087778498" observedRunningTime="2026-01-27 07:02:42.923146694 +0000 UTC m=+984.030114021" watchObservedRunningTime="2026-01-27 07:02:42.930086124 +0000 UTC m=+984.037053451" Jan 27 07:02:42 crc kubenswrapper[4796]: I0127 07:02:42.951257 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-j7dc7" podStartSLOduration=8.681788815 podStartE2EDuration="23.95122272s" podCreationTimestamp="2026-01-27 07:02:19 +0000 UTC" firstStartedPulling="2026-01-27 07:02:20.680586148 +0000 UTC m=+961.787553505" lastFinishedPulling="2026-01-27 07:02:35.950020083 +0000 UTC m=+977.056987410" observedRunningTime="2026-01-27 07:02:42.937124645 +0000 UTC m=+984.044091982" watchObservedRunningTime="2026-01-27 07:02:42.95122272 +0000 UTC m=+984.058190047" Jan 27 07:02:43 crc kubenswrapper[4796]: I0127 07:02:43.915668 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-dt7z6" Jan 27 07:02:43 crc kubenswrapper[4796]: I0127 07:02:43.934928 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-dt7z6" podStartSLOduration=9.658397686 
podStartE2EDuration="24.934905853s" podCreationTimestamp="2026-01-27 07:02:19 +0000 UTC" firstStartedPulling="2026-01-27 07:02:20.689836071 +0000 UTC m=+961.796803398" lastFinishedPulling="2026-01-27 07:02:35.966344248 +0000 UTC m=+977.073311565" observedRunningTime="2026-01-27 07:02:43.930274076 +0000 UTC m=+985.037241413" watchObservedRunningTime="2026-01-27 07:02:43.934905853 +0000 UTC m=+985.041873180" Jan 27 07:02:43 crc kubenswrapper[4796]: I0127 07:02:43.961444 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-74866cc64d-fsxqp" podStartSLOduration=10.257169578 podStartE2EDuration="25.961414693s" podCreationTimestamp="2026-01-27 07:02:18 +0000 UTC" firstStartedPulling="2026-01-27 07:02:20.276559626 +0000 UTC m=+961.383526953" lastFinishedPulling="2026-01-27 07:02:35.980804701 +0000 UTC m=+977.087772068" observedRunningTime="2026-01-27 07:02:43.948820253 +0000 UTC m=+985.055787600" watchObservedRunningTime="2026-01-27 07:02:43.961414693 +0000 UTC m=+985.068382040" Jan 27 07:02:43 crc kubenswrapper[4796]: I0127 07:02:43.978283 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-q44zr" podStartSLOduration=10.690689838 podStartE2EDuration="25.97824835s" podCreationTimestamp="2026-01-27 07:02:18 +0000 UTC" firstStartedPulling="2026-01-27 07:02:20.68068775 +0000 UTC m=+961.787655077" lastFinishedPulling="2026-01-27 07:02:35.968246252 +0000 UTC m=+977.075213589" observedRunningTime="2026-01-27 07:02:43.969769405 +0000 UTC m=+985.076736742" watchObservedRunningTime="2026-01-27 07:02:43.97824835 +0000 UTC m=+985.085215687" Jan 27 07:02:43 crc kubenswrapper[4796]: I0127 07:02:43.994465 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-v7pbv" podStartSLOduration=10.257710889 podStartE2EDuration="25.994444972s" podCreationTimestamp="2026-01-27 07:02:18 +0000 UTC" firstStartedPulling="2026-01-27 07:02:20.256681168 +0000 UTC m=+961.363648495" lastFinishedPulling="2026-01-27 07:02:35.993415241 +0000 UTC m=+977.100382578" observedRunningTime="2026-01-27 07:02:43.986971011 +0000 UTC m=+985.093938338" watchObservedRunningTime="2026-01-27 07:02:43.994444972 +0000 UTC m=+985.101412299" Jan 27 07:02:44 crc kubenswrapper[4796]: I0127 07:02:44.010627 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-4ksng" podStartSLOduration=10.319533042 podStartE2EDuration="26.010603004s" podCreationTimestamp="2026-01-27 07:02:18 +0000 UTC" firstStartedPulling="2026-01-27 07:02:20.275938821 +0000 UTC m=+961.382906148" lastFinishedPulling="2026-01-27 07:02:35.967008783 +0000 UTC m=+977.073976110" observedRunningTime="2026-01-27 07:02:44.003081721 +0000 UTC m=+985.110049048" watchObservedRunningTime="2026-01-27 07:02:44.010603004 +0000 UTC m=+985.117570331" Jan 27 07:02:44 crc kubenswrapper[4796]: I0127 07:02:44.024972 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-z2nvf" podStartSLOduration=10.324432645 podStartE2EDuration="26.024948184s" podCreationTimestamp="2026-01-27 07:02:18 +0000 UTC" firstStartedPulling="2026-01-27 07:02:20.267793124 +0000 UTC m=+961.374760451" lastFinishedPulling="2026-01-27 07:02:35.968308623 +0000 UTC m=+977.075275990" 
observedRunningTime="2026-01-27 07:02:44.019067539 +0000 UTC m=+985.126034876" watchObservedRunningTime="2026-01-27 07:02:44.024948184 +0000 UTC m=+985.131915511" Jan 27 07:02:45 crc kubenswrapper[4796]: I0127 07:02:45.537368 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-4pfvn" Jan 27 07:02:45 crc kubenswrapper[4796]: I0127 07:02:45.542283 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-64d6b84b7b-4pfvn" Jan 27 07:02:48 crc kubenswrapper[4796]: I0127 07:02:48.997142 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-4ksng" Jan 27 07:02:48 crc kubenswrapper[4796]: I0127 07:02:48.998471 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-4ksng" Jan 27 07:02:49 crc kubenswrapper[4796]: I0127 07:02:49.008007 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-v7pbv" Jan 27 07:02:49 crc kubenswrapper[4796]: I0127 07:02:49.010426 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-v7pbv" Jan 27 07:02:49 crc kubenswrapper[4796]: I0127 07:02:49.035880 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-z2nvf" Jan 27 07:02:49 crc kubenswrapper[4796]: I0127 07:02:49.039899 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-z2nvf" Jan 27 07:02:49 crc kubenswrapper[4796]: I0127 07:02:49.072161 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-74866cc64d-fsxqp" Jan 27 07:02:49 crc kubenswrapper[4796]: I0127 07:02:49.074999 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-74866cc64d-fsxqp" Jan 27 07:02:49 crc kubenswrapper[4796]: I0127 07:02:49.107297 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-q44zr" Jan 27 07:02:49 crc kubenswrapper[4796]: I0127 07:02:49.110401 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-q44zr" Jan 27 07:02:49 crc kubenswrapper[4796]: I0127 07:02:49.564416 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-mrhx6" Jan 27 07:02:49 crc kubenswrapper[4796]: I0127 07:02:49.590549 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-mcxtc" Jan 27 07:02:49 crc kubenswrapper[4796]: I0127 07:02:49.619819 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-dt7z6" Jan 27 07:02:49 crc kubenswrapper[4796]: I0127 07:02:49.652819 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-j7dc7" Jan 27 07:02:52 crc kubenswrapper[4796]: I0127 07:02:52.982452 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-7l6kl" event={"ID":"789b993b-5e32-4cd6-811f-8aecbe093298","Type":"ContainerStarted","Data":"d8594d1203c409507a79a50ad31d0e6a36c08130f1ed68013e54f8bb1f0ebbac"} Jan 27 07:02:52 crc kubenswrapper[4796]: I0127 07:02:52.983098 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-7l6kl" Jan 27 07:02:52 crc kubenswrapper[4796]: I0127 07:02:52.984552 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xjlcw" event={"ID":"777dc0b6-3bda-495c-835b-069590aa2c0f","Type":"ContainerStarted","Data":"5100a3c90278d4d3ba6ac1a51b192ce1c99e29466f321f8a91e1c5ac1c31cc65"} Jan 27 07:02:52 crc kubenswrapper[4796]: I0127 07:02:52.986369 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-v6mzv" event={"ID":"d114853c-9a01-4565-b547-aaccd3ab9d26","Type":"ContainerStarted","Data":"04dc172acac5c93f0004c98a6fed8ab5ef658f15b660cf2140e298d35ee1b1c8"} Jan 27 07:02:52 crc kubenswrapper[4796]: I0127 07:02:52.986583 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-v6mzv" Jan 27 07:02:52 crc kubenswrapper[4796]: I0127 07:02:52.987778 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-26p8v" event={"ID":"baedb6b3-ef71-431f-aab8-2acd5b458e71","Type":"ContainerStarted","Data":"2c0c1b5c1a42281852c6a8093b97f226c2ead4cb96a6e1f562b66138eba1d2b4"} Jan 27 07:02:52 crc kubenswrapper[4796]: I0127 07:02:52.987990 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-26p8v" Jan 27 07:02:52 crc kubenswrapper[4796]: I0127 07:02:52.989484 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-rhfpn" event={"ID":"614461e3-64f7-4aa0-96a7-f8b31fefdbf1","Type":"ContainerStarted","Data":"e0a9e31efcae1cf768885a5ac4f3ff49b4e77b453dacfe91650d21bf73c7badd"} Jan 27 07:02:52 crc kubenswrapper[4796]: I0127 07:02:52.989737 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-rhfpn" Jan 27 07:02:52 crc kubenswrapper[4796]: I0127 07:02:52.990856 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854v5n85" event={"ID":"dd9ed078-9356-4d55-96d5-33039c9a0c68","Type":"ContainerStarted","Data":"31e0a9e1704796428b43ac86105cae93b6cf7c735fa957a11b3dd6421855dfca"} Jan 27 07:02:52 crc kubenswrapper[4796]: I0127 07:02:52.991048 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854v5n85" Jan 27 07:02:52 crc kubenswrapper[4796]: I0127 07:02:52.992282 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8lwww" 
event={"ID":"dde7d536-fd29-4887-82eb-41b2002bf874","Type":"ContainerStarted","Data":"8028efacc7c7d6f175c96fe364d27833bf57d10dad1ddae656c360605c4c2d77"} Jan 27 07:02:52 crc kubenswrapper[4796]: I0127 07:02:52.992754 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8lwww" Jan 27 07:02:52 crc kubenswrapper[4796]: I0127 07:02:52.994283 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-gh6tm" event={"ID":"08457a1a-2d34-4d6f-8df1-c40f80daf96b","Type":"ContainerStarted","Data":"8ca892861f55b856258d1e6dac8d3b5b09187709ed2a6142a6d0b6e27eee4b4f"} Jan 27 07:02:52 crc kubenswrapper[4796]: I0127 07:02:52.994811 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-gh6tm" Jan 27 07:02:52 crc kubenswrapper[4796]: I0127 07:02:52.996184 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-mqmpw" event={"ID":"1ea0aa0e-1344-489a-9f52-8a677dfaba38","Type":"ContainerStarted","Data":"8e8ee74ed7ca46aa35c6c7949c51b94b93cfaa46bc0775d9962680b18e564f09"} Jan 27 07:02:52 crc kubenswrapper[4796]: I0127 07:02:52.996798 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-mqmpw" Jan 27 07:02:52 crc kubenswrapper[4796]: I0127 07:02:52.998000 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-k9hg7" event={"ID":"742577d6-bab7-4548-a7b2-84f562498a1c","Type":"ContainerStarted","Data":"a90d4e857ce95f6e729773538b0b166f7a6cb46e8c18980fffec18ba7bda317e"} Jan 27 07:02:52 crc kubenswrapper[4796]: I0127 07:02:52.998425 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-k9hg7" Jan 27 07:02:52 crc kubenswrapper[4796]: I0127 07:02:52.999900 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-cpcgw" event={"ID":"54314102-584a-4445-9477-7b089fabe859","Type":"ContainerStarted","Data":"c0bcbe83785de348c36cc04c4de45082470cf8ac75c68653def483bfe95b352d"} Jan 27 07:02:53 crc kubenswrapper[4796]: I0127 07:02:53.000488 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-cpcgw" Jan 27 07:02:53 crc kubenswrapper[4796]: I0127 07:02:53.001749 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-m6r6w" event={"ID":"2882c56b-f839-4eb0-8b9c-9dce77a548ae","Type":"ContainerStarted","Data":"d742d944a408c9b9c55fcac08d19187cb1b0bb5debcafc94cd4e98cd53adf75a"} Jan 27 07:02:53 crc kubenswrapper[4796]: I0127 07:02:53.002252 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-m6r6w" Jan 27 07:02:53 crc kubenswrapper[4796]: I0127 07:02:53.004560 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-7m7qw" event={"ID":"e68c56f0-76cf-4607-8235-ecd1445ba2de","Type":"ContainerStarted","Data":"f3b0e3e934fd88746f279c1d1dd350bb8566acfcc5c8ec1dfbc8e61be4da99ae"} Jan 27 07:02:53 crc kubenswrapper[4796]: I0127 
07:02:53.005747 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-7m7qw" Jan 27 07:02:53 crc kubenswrapper[4796]: I0127 07:02:53.199886 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854v5n85" podStartSLOduration=19.105673408 podStartE2EDuration="34.199854243s" podCreationTimestamp="2026-01-27 07:02:19 +0000 UTC" firstStartedPulling="2026-01-27 07:02:36.485315243 +0000 UTC m=+977.592282570" lastFinishedPulling="2026-01-27 07:02:51.579496088 +0000 UTC m=+992.686463405" observedRunningTime="2026-01-27 07:02:53.199151077 +0000 UTC m=+994.306118474" watchObservedRunningTime="2026-01-27 07:02:53.199854243 +0000 UTC m=+994.306821570" Jan 27 07:02:53 crc kubenswrapper[4796]: I0127 07:02:53.202662 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-7l6kl" podStartSLOduration=20.03180514 podStartE2EDuration="35.202655948s" podCreationTimestamp="2026-01-27 07:02:18 +0000 UTC" firstStartedPulling="2026-01-27 07:02:36.40866298 +0000 UTC m=+977.515630307" lastFinishedPulling="2026-01-27 07:02:51.579513788 +0000 UTC m=+992.686481115" observedRunningTime="2026-01-27 07:02:53.077634943 +0000 UTC m=+994.184602270" watchObservedRunningTime="2026-01-27 07:02:53.202655948 +0000 UTC m=+994.309623275" Jan 27 07:02:53 crc kubenswrapper[4796]: I0127 07:02:53.236008 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xjlcw" podStartSLOduration=2.903297938 podStartE2EDuration="34.235992415s" podCreationTimestamp="2026-01-27 07:02:19 +0000 UTC" firstStartedPulling="2026-01-27 07:02:20.778715024 +0000 UTC m=+961.885682351" lastFinishedPulling="2026-01-27 07:02:52.111409501 +0000 UTC m=+993.218376828" observedRunningTime="2026-01-27 07:02:53.235327369 +0000 UTC m=+994.342294686" watchObservedRunningTime="2026-01-27 07:02:53.235992415 +0000 UTC m=+994.342959742" Jan 27 07:02:53 crc kubenswrapper[4796]: I0127 07:02:53.279382 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-m6r6w" podStartSLOduration=3.9194513 podStartE2EDuration="35.279368762s" podCreationTimestamp="2026-01-27 07:02:18 +0000 UTC" firstStartedPulling="2026-01-27 07:02:20.73285971 +0000 UTC m=+961.839827037" lastFinishedPulling="2026-01-27 07:02:52.092777172 +0000 UTC m=+993.199744499" observedRunningTime="2026-01-27 07:02:53.275141235 +0000 UTC m=+994.382108562" watchObservedRunningTime="2026-01-27 07:02:53.279368762 +0000 UTC m=+994.386336089" Jan 27 07:02:53 crc kubenswrapper[4796]: I0127 07:02:53.328756 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-rhfpn" podStartSLOduration=3.926262616 podStartE2EDuration="35.328742377s" podCreationTimestamp="2026-01-27 07:02:18 +0000 UTC" firstStartedPulling="2026-01-27 07:02:20.709021492 +0000 UTC m=+961.815988819" lastFinishedPulling="2026-01-27 07:02:52.111501253 +0000 UTC m=+993.218468580" observedRunningTime="2026-01-27 07:02:53.326836644 +0000 UTC m=+994.433803971" watchObservedRunningTime="2026-01-27 07:02:53.328742377 +0000 UTC m=+994.435709704" Jan 27 07:02:53 crc kubenswrapper[4796]: I0127 07:02:53.369048 4796 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-k9hg7" podStartSLOduration=3.532181672 podStartE2EDuration="35.369035484s" podCreationTimestamp="2026-01-27 07:02:18 +0000 UTC" firstStartedPulling="2026-01-27 07:02:20.27458364 +0000 UTC m=+961.381550967" lastFinishedPulling="2026-01-27 07:02:52.111437452 +0000 UTC m=+993.218404779" observedRunningTime="2026-01-27 07:02:53.366588128 +0000 UTC m=+994.473555445" watchObservedRunningTime="2026-01-27 07:02:53.369035484 +0000 UTC m=+994.476002811" Jan 27 07:02:53 crc kubenswrapper[4796]: I0127 07:02:53.391448 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-cpcgw" podStartSLOduration=3.9970103850000003 podStartE2EDuration="35.39143597s" podCreationTimestamp="2026-01-27 07:02:18 +0000 UTC" firstStartedPulling="2026-01-27 07:02:20.712190215 +0000 UTC m=+961.819157542" lastFinishedPulling="2026-01-27 07:02:52.1066158 +0000 UTC m=+993.213583127" observedRunningTime="2026-01-27 07:02:53.388958382 +0000 UTC m=+994.495925709" watchObservedRunningTime="2026-01-27 07:02:53.39143597 +0000 UTC m=+994.498403297" Jan 27 07:02:53 crc kubenswrapper[4796]: I0127 07:02:53.418191 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-v6mzv" podStartSLOduration=4.032623192 podStartE2EDuration="35.418175154s" podCreationTimestamp="2026-01-27 07:02:18 +0000 UTC" firstStartedPulling="2026-01-27 07:02:20.725835989 +0000 UTC m=+961.832803316" lastFinishedPulling="2026-01-27 07:02:52.111387951 +0000 UTC m=+993.218355278" observedRunningTime="2026-01-27 07:02:53.409690609 +0000 UTC m=+994.516657936" watchObservedRunningTime="2026-01-27 07:02:53.418175154 +0000 UTC m=+994.525142481" Jan 27 07:02:53 crc kubenswrapper[4796]: I0127 07:02:53.467733 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-mqmpw" podStartSLOduration=4.620688457 podStartE2EDuration="35.467718074s" podCreationTimestamp="2026-01-27 07:02:18 +0000 UTC" firstStartedPulling="2026-01-27 07:02:20.732653615 +0000 UTC m=+961.839620942" lastFinishedPulling="2026-01-27 07:02:51.579683222 +0000 UTC m=+992.686650559" observedRunningTime="2026-01-27 07:02:53.466073346 +0000 UTC m=+994.573040673" watchObservedRunningTime="2026-01-27 07:02:53.467718074 +0000 UTC m=+994.574685401" Jan 27 07:02:53 crc kubenswrapper[4796]: I0127 07:02:53.490917 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-7m7qw" podStartSLOduration=3.186111193 podStartE2EDuration="34.490896177s" podCreationTimestamp="2026-01-27 07:02:19 +0000 UTC" firstStartedPulling="2026-01-27 07:02:20.778420388 +0000 UTC m=+961.885387715" lastFinishedPulling="2026-01-27 07:02:52.083205372 +0000 UTC m=+993.190172699" observedRunningTime="2026-01-27 07:02:53.489629358 +0000 UTC m=+994.596596705" watchObservedRunningTime="2026-01-27 07:02:53.490896177 +0000 UTC m=+994.597863504" Jan 27 07:02:53 crc kubenswrapper[4796]: I0127 07:02:53.513080 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8lwww" podStartSLOduration=3.124910865 podStartE2EDuration="34.513062017s" podCreationTimestamp="2026-01-27 07:02:19 +0000 UTC" 
firstStartedPulling="2026-01-27 07:02:20.734904567 +0000 UTC m=+961.841871894" lastFinishedPulling="2026-01-27 07:02:52.123055719 +0000 UTC m=+993.230023046" observedRunningTime="2026-01-27 07:02:53.507096889 +0000 UTC m=+994.614064216" watchObservedRunningTime="2026-01-27 07:02:53.513062017 +0000 UTC m=+994.620029354" Jan 27 07:02:53 crc kubenswrapper[4796]: I0127 07:02:53.529853 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-gh6tm" podStartSLOduration=4.091451547 podStartE2EDuration="35.529833523s" podCreationTimestamp="2026-01-27 07:02:18 +0000 UTC" firstStartedPulling="2026-01-27 07:02:20.649014872 +0000 UTC m=+961.755982199" lastFinishedPulling="2026-01-27 07:02:52.087396828 +0000 UTC m=+993.194364175" observedRunningTime="2026-01-27 07:02:53.523138878 +0000 UTC m=+994.630106205" watchObservedRunningTime="2026-01-27 07:02:53.529833523 +0000 UTC m=+994.636800840" Jan 27 07:02:53 crc kubenswrapper[4796]: I0127 07:02:53.554638 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-26p8v" podStartSLOduration=4.235009167 podStartE2EDuration="35.554621322s" podCreationTimestamp="2026-01-27 07:02:18 +0000 UTC" firstStartedPulling="2026-01-27 07:02:20.71155722 +0000 UTC m=+961.818524547" lastFinishedPulling="2026-01-27 07:02:52.031169385 +0000 UTC m=+993.138136702" observedRunningTime="2026-01-27 07:02:53.550016216 +0000 UTC m=+994.656983543" watchObservedRunningTime="2026-01-27 07:02:53.554621322 +0000 UTC m=+994.661588649" Jan 27 07:02:58 crc kubenswrapper[4796]: I0127 07:02:58.994213 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-65ff799cfd-k9hg7" Jan 27 07:02:59 crc kubenswrapper[4796]: I0127 07:02:59.094402 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-mqmpw" Jan 27 07:02:59 crc kubenswrapper[4796]: I0127 07:02:59.299668 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-rhfpn" Jan 27 07:02:59 crc kubenswrapper[4796]: I0127 07:02:59.309186 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-cpcgw" Jan 27 07:02:59 crc kubenswrapper[4796]: I0127 07:02:59.437914 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-v6mzv" Jan 27 07:02:59 crc kubenswrapper[4796]: I0127 07:02:59.463489 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-m6r6w" Jan 27 07:02:59 crc kubenswrapper[4796]: I0127 07:02:59.508340 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-gh6tm" Jan 27 07:02:59 crc kubenswrapper[4796]: I0127 07:02:59.512322 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7875d7675-26p8v" Jan 27 07:02:59 crc kubenswrapper[4796]: I0127 07:02:59.855343 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/test-operator-controller-manager-69797bbcbd-8lwww" Jan 27 07:02:59 crc kubenswrapper[4796]: I0127 07:02:59.889211 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-75db85654f-7m7qw" Jan 27 07:03:03 crc kubenswrapper[4796]: I0127 07:03:03.788419 4796 patch_prober.go:28] interesting pod/machine-config-daemon-qfqgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:03:03 crc kubenswrapper[4796]: I0127 07:03:03.788674 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" podUID="84d7512b-555d-440a-b817-deb8ba12f61d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:03:04 crc kubenswrapper[4796]: I0127 07:03:04.774478 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-7l6kl" Jan 27 07:03:05 crc kubenswrapper[4796]: I0127 07:03:05.197354 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854v5n85" Jan 27 07:03:33 crc kubenswrapper[4796]: I0127 07:03:33.788123 4796 patch_prober.go:28] interesting pod/machine-config-daemon-qfqgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:03:33 crc kubenswrapper[4796]: I0127 07:03:33.788698 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" podUID="84d7512b-555d-440a-b817-deb8ba12f61d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:03:52 crc kubenswrapper[4796]: I0127 07:03:52.306499 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tm8kr/must-gather-zwd25"] Jan 27 07:03:52 crc kubenswrapper[4796]: I0127 07:03:52.308164 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tm8kr/must-gather-zwd25" Jan 27 07:03:52 crc kubenswrapper[4796]: I0127 07:03:52.310584 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tm8kr"/"openshift-service-ca.crt" Jan 27 07:03:52 crc kubenswrapper[4796]: I0127 07:03:52.310776 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-tm8kr"/"default-dockercfg-6l2p8" Jan 27 07:03:52 crc kubenswrapper[4796]: I0127 07:03:52.311062 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tm8kr"/"kube-root-ca.crt" Jan 27 07:03:52 crc kubenswrapper[4796]: I0127 07:03:52.314218 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tm8kr/must-gather-zwd25"] Jan 27 07:03:52 crc kubenswrapper[4796]: I0127 07:03:52.410651 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/11872b31-f60e-4d90-a428-964a03a45dcc-must-gather-output\") pod \"must-gather-zwd25\" (UID: \"11872b31-f60e-4d90-a428-964a03a45dcc\") " pod="openshift-must-gather-tm8kr/must-gather-zwd25" Jan 27 07:03:52 crc kubenswrapper[4796]: I0127 07:03:52.410864 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95jjj\" (UniqueName: \"kubernetes.io/projected/11872b31-f60e-4d90-a428-964a03a45dcc-kube-api-access-95jjj\") pod \"must-gather-zwd25\" (UID: \"11872b31-f60e-4d90-a428-964a03a45dcc\") " pod="openshift-must-gather-tm8kr/must-gather-zwd25" Jan 27 07:03:52 crc kubenswrapper[4796]: I0127 07:03:52.513399 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95jjj\" (UniqueName: \"kubernetes.io/projected/11872b31-f60e-4d90-a428-964a03a45dcc-kube-api-access-95jjj\") pod \"must-gather-zwd25\" (UID: \"11872b31-f60e-4d90-a428-964a03a45dcc\") " pod="openshift-must-gather-tm8kr/must-gather-zwd25" Jan 27 07:03:52 crc kubenswrapper[4796]: I0127 07:03:52.513566 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/11872b31-f60e-4d90-a428-964a03a45dcc-must-gather-output\") pod \"must-gather-zwd25\" (UID: \"11872b31-f60e-4d90-a428-964a03a45dcc\") " pod="openshift-must-gather-tm8kr/must-gather-zwd25" Jan 27 07:03:52 crc kubenswrapper[4796]: I0127 07:03:52.514033 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/11872b31-f60e-4d90-a428-964a03a45dcc-must-gather-output\") pod \"must-gather-zwd25\" (UID: \"11872b31-f60e-4d90-a428-964a03a45dcc\") " pod="openshift-must-gather-tm8kr/must-gather-zwd25" Jan 27 07:03:52 crc kubenswrapper[4796]: I0127 07:03:52.531814 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95jjj\" (UniqueName: \"kubernetes.io/projected/11872b31-f60e-4d90-a428-964a03a45dcc-kube-api-access-95jjj\") pod \"must-gather-zwd25\" (UID: \"11872b31-f60e-4d90-a428-964a03a45dcc\") " pod="openshift-must-gather-tm8kr/must-gather-zwd25" Jan 27 07:03:52 crc kubenswrapper[4796]: I0127 07:03:52.660880 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tm8kr/must-gather-zwd25" Jan 27 07:03:53 crc kubenswrapper[4796]: I0127 07:03:53.141546 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tm8kr/must-gather-zwd25"] Jan 27 07:03:53 crc kubenswrapper[4796]: I0127 07:03:53.150796 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 07:03:53 crc kubenswrapper[4796]: I0127 07:03:53.470210 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tm8kr/must-gather-zwd25" event={"ID":"11872b31-f60e-4d90-a428-964a03a45dcc","Type":"ContainerStarted","Data":"2a3f195116d4ba6c9e0b4b2c46dcb7af8c1013ae23b0ce26eb2fd6f8efda26b5"} Jan 27 07:03:59 crc kubenswrapper[4796]: I0127 07:03:59.524457 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tm8kr/must-gather-zwd25" event={"ID":"11872b31-f60e-4d90-a428-964a03a45dcc","Type":"ContainerStarted","Data":"1c999a90d4cfae2e7388eccb9e3cff6b614d4790922888c2db64b2f09a58c282"} Jan 27 07:04:00 crc kubenswrapper[4796]: I0127 07:04:00.534139 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tm8kr/must-gather-zwd25" event={"ID":"11872b31-f60e-4d90-a428-964a03a45dcc","Type":"ContainerStarted","Data":"026c1194801852722b01b135c1ad1fed69234f66bf2023c9f3fc8aed69373b54"} Jan 27 07:04:00 crc kubenswrapper[4796]: I0127 07:04:00.562473 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-tm8kr/must-gather-zwd25" podStartSLOduration=2.591413851 podStartE2EDuration="8.562449128s" podCreationTimestamp="2026-01-27 07:03:52 +0000 UTC" firstStartedPulling="2026-01-27 07:03:53.150438054 +0000 UTC m=+1054.257405381" lastFinishedPulling="2026-01-27 07:03:59.121473331 +0000 UTC m=+1060.228440658" observedRunningTime="2026-01-27 07:04:00.552878612 +0000 UTC m=+1061.659845969" watchObservedRunningTime="2026-01-27 07:04:00.562449128 +0000 UTC m=+1061.669416455" Jan 27 07:04:03 crc kubenswrapper[4796]: I0127 07:04:03.788521 4796 patch_prober.go:28] interesting pod/machine-config-daemon-qfqgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:04:03 crc kubenswrapper[4796]: I0127 07:04:03.788837 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" podUID="84d7512b-555d-440a-b817-deb8ba12f61d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:04:03 crc kubenswrapper[4796]: I0127 07:04:03.788880 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" Jan 27 07:04:03 crc kubenswrapper[4796]: I0127 07:04:03.789431 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5279cebf05cbd111eece26525bf47f8c49c929ff7313221c4f6857c9246306cd"} pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 07:04:03 crc kubenswrapper[4796]: I0127 07:04:03.789480 4796 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" podUID="84d7512b-555d-440a-b817-deb8ba12f61d" containerName="machine-config-daemon" containerID="cri-o://5279cebf05cbd111eece26525bf47f8c49c929ff7313221c4f6857c9246306cd" gracePeriod=600 Jan 27 07:04:04 crc kubenswrapper[4796]: I0127 07:04:04.558677 4796 generic.go:334] "Generic (PLEG): container finished" podID="84d7512b-555d-440a-b817-deb8ba12f61d" containerID="5279cebf05cbd111eece26525bf47f8c49c929ff7313221c4f6857c9246306cd" exitCode=0 Jan 27 07:04:04 crc kubenswrapper[4796]: I0127 07:04:04.558759 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" event={"ID":"84d7512b-555d-440a-b817-deb8ba12f61d","Type":"ContainerDied","Data":"5279cebf05cbd111eece26525bf47f8c49c929ff7313221c4f6857c9246306cd"} Jan 27 07:04:04 crc kubenswrapper[4796]: I0127 07:04:04.559116 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" event={"ID":"84d7512b-555d-440a-b817-deb8ba12f61d","Type":"ContainerStarted","Data":"9d6ff2d1b646e667cc028cbd1ff4dea7c896e3e2292d3fb00ecbecdf6a7880b1"} Jan 27 07:04:04 crc kubenswrapper[4796]: I0127 07:04:04.559150 4796 scope.go:117] "RemoveContainer" containerID="2c94d4638721f045282b7bf6b0d1a7f76dc81b09a6581078ab42936dd6c8b456" Jan 27 07:04:58 crc kubenswrapper[4796]: I0127 07:04:58.103474 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e96sbnz5_e6c83c32-76cc-4afb-a71f-02a66a616302/util/0.log" Jan 27 07:04:58 crc kubenswrapper[4796]: I0127 07:04:58.323742 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e96sbnz5_e6c83c32-76cc-4afb-a71f-02a66a616302/util/0.log" Jan 27 07:04:58 crc kubenswrapper[4796]: I0127 07:04:58.356720 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e96sbnz5_e6c83c32-76cc-4afb-a71f-02a66a616302/pull/0.log" Jan 27 07:04:58 crc kubenswrapper[4796]: I0127 07:04:58.389822 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e96sbnz5_e6c83c32-76cc-4afb-a71f-02a66a616302/pull/0.log" Jan 27 07:04:58 crc kubenswrapper[4796]: I0127 07:04:58.550428 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e96sbnz5_e6c83c32-76cc-4afb-a71f-02a66a616302/pull/0.log" Jan 27 07:04:58 crc kubenswrapper[4796]: I0127 07:04:58.568963 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e96sbnz5_e6c83c32-76cc-4afb-a71f-02a66a616302/extract/0.log" Jan 27 07:04:58 crc kubenswrapper[4796]: I0127 07:04:58.588173 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5c99822263aa7a87433880ac191cc6512523cd2c81e588daea8ac94e96sbnz5_e6c83c32-76cc-4afb-a71f-02a66a616302/util/0.log" Jan 27 07:04:58 crc kubenswrapper[4796]: I0127 07:04:58.721747 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-65ff799cfd-k9hg7_742577d6-bab7-4548-a7b2-84f562498a1c/manager/0.log" Jan 27 07:04:58 crc kubenswrapper[4796]: I0127 07:04:58.761635 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-655bf9cfbb-4ksng_aa911582-096f-4b20-876c-c765de54b4fd/manager/0.log" Jan 27 07:04:58 crc kubenswrapper[4796]: I0127 07:04:58.892294 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-77554cdc5c-v7pbv_9aaf11e1-9a42-4ab2-a112-81b2d07e56d8/manager/0.log" Jan 27 07:04:58 crc kubenswrapper[4796]: I0127 07:04:58.981411 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-67dd55ff59-z2nvf_eebb0fb3-128f-4de7-afb9-9d0d2963b51e/manager/0.log" Jan 27 07:04:59 crc kubenswrapper[4796]: I0127 07:04:59.102175 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-74866cc64d-fsxqp_649daee5-1c25-49bf-ade1-83c14d9603a3/manager/0.log" Jan 27 07:04:59 crc kubenswrapper[4796]: I0127 07:04:59.194986 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-mqmpw_1ea0aa0e-1344-489a-9f52-8a677dfaba38/manager/0.log" Jan 27 07:04:59 crc kubenswrapper[4796]: I0127 07:04:59.258203 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7d75bc88d5-7l6kl_789b993b-5e32-4cd6-811f-8aecbe093298/manager/0.log" Jan 27 07:04:59 crc kubenswrapper[4796]: I0127 07:04:59.373432 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-768b776ffb-q44zr_af19b00f-4091-45da-b90b-15e1265c4239/manager/0.log" Jan 27 07:04:59 crc kubenswrapper[4796]: I0127 07:04:59.445072 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-55f684fd56-rhfpn_614461e3-64f7-4aa0-96a7-f8b31fefdbf1/manager/0.log" Jan 27 07:04:59 crc kubenswrapper[4796]: I0127 07:04:59.540219 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-849fcfbb6b-cpcgw_54314102-584a-4445-9477-7b089fabe859/manager/0.log" Jan 27 07:04:59 crc kubenswrapper[4796]: I0127 07:04:59.597437 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-v6mzv_d114853c-9a01-4565-b547-aaccd3ab9d26/manager/0.log" Jan 27 07:04:59 crc kubenswrapper[4796]: I0127 07:04:59.730842 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7ffd8d76d4-m6r6w_2882c56b-f839-4eb0-8b9c-9dce77a548ae/manager/0.log" Jan 27 07:04:59 crc kubenswrapper[4796]: I0127 07:04:59.830296 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7f54b7d6d4-gh6tm_08457a1a-2d34-4d6f-8df1-c40f80daf96b/manager/0.log" Jan 27 07:04:59 crc kubenswrapper[4796]: I0127 07:04:59.926403 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7875d7675-26p8v_baedb6b3-ef71-431f-aab8-2acd5b458e71/manager/0.log" Jan 27 07:04:59 crc kubenswrapper[4796]: I0127 07:04:59.984548 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854v5n85_dd9ed078-9356-4d55-96d5-33039c9a0c68/manager/0.log" Jan 27 07:05:00 crc kubenswrapper[4796]: I0127 07:05:00.207905 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-64d6b84b7b-4pfvn_214120a9-fff2-405b-badc-8dbe507546bd/manager/0.log" Jan 27 07:05:00 crc kubenswrapper[4796]: I0127 07:05:00.248083 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5c58fc478-n7ck7_b92c2b36-8c14-43e5-aa07-43aadfe3cda4/operator/0.log" Jan 27 07:05:00 crc kubenswrapper[4796]: I0127 07:05:00.348646 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-2f5gg_507ccf39-7d25-425a-a0b0-8381ab9b7562/registry-server/0.log" Jan 27 07:05:00 crc kubenswrapper[4796]: I0127 07:05:00.424823 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-mrhx6_bd76656d-eae6-4878-ae92-c27deac760ac/manager/0.log" Jan 27 07:05:00 crc kubenswrapper[4796]: I0127 07:05:00.504810 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-mcxtc_013b66ef-a690-4075-9afc-9c9dd1822a3f/manager/0.log" Jan 27 07:05:00 crc kubenswrapper[4796]: I0127 07:05:00.651304 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-xjlcw_777dc0b6-3bda-495c-835b-069590aa2c0f/operator/0.log" Jan 27 07:05:00 crc kubenswrapper[4796]: I0127 07:05:00.708043 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-j7dc7_d0fc3b17-8640-4ad4-8794-890980c2cd92/manager/0.log" Jan 27 07:05:00 crc kubenswrapper[4796]: I0127 07:05:00.948565 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-799bc87c89-dt7z6_7e4727d2-ec3b-4b51-bdc4-d4bf1f91c0ed/manager/0.log" Jan 27 07:05:01 crc kubenswrapper[4796]: I0127 07:05:01.022605 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-8lwww_dde7d536-fd29-4887-82eb-41b2002bf874/manager/0.log" Jan 27 07:05:01 crc kubenswrapper[4796]: I0127 07:05:01.116654 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-75db85654f-7m7qw_e68c56f0-76cf-4607-8235-ecd1445ba2de/manager/0.log" Jan 27 07:05:19 crc kubenswrapper[4796]: I0127 07:05:19.878793 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-pd9lg_4bb970b4-43c2-46e2-b707-20145a03a2bb/control-plane-machine-set-operator/0.log" Jan 27 07:05:20 crc kubenswrapper[4796]: I0127 07:05:20.034889 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-46nhj_68d73a51-598c-41e8-9064-3942bd4f93df/machine-api-operator/0.log" Jan 27 07:05:20 crc kubenswrapper[4796]: I0127 07:05:20.041408 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-46nhj_68d73a51-598c-41e8-9064-3942bd4f93df/kube-rbac-proxy/0.log" Jan 27 07:05:31 crc kubenswrapper[4796]: I0127 07:05:31.800767 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-c464t_0d54c657-ac07-4f66-afe0-8f6109670a43/cert-manager-controller/0.log" Jan 27 07:05:32 crc kubenswrapper[4796]: I0127 07:05:32.006342 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-nrqwp_6a87b8e8-dd35-4bb9-88cd-b41e445d785d/cert-manager-cainjector/0.log" Jan 27 07:05:32 crc kubenswrapper[4796]: I0127 07:05:32.050653 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-qwbkz_42a9a105-7972-4287-8bb6-203e3dfa1339/cert-manager-webhook/0.log" Jan 27 07:05:44 crc kubenswrapper[4796]: I0127 07:05:44.527577 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-7jd5j_879b02ea-4120-4fc0-9be3-2cdabfc554f2/nmstate-console-plugin/0.log" Jan 27 07:05:44 crc kubenswrapper[4796]: I0127 07:05:44.725676 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-j74n8_ac6cd9e3-211b-4aa3-9bf8-a76b68e6deb0/nmstate-handler/0.log" Jan 27 07:05:44 crc kubenswrapper[4796]: I0127 07:05:44.774752 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-6qqbr_9d2d7f37-8d8f-4d7b-a77b-1769764774a3/kube-rbac-proxy/0.log" Jan 27 07:05:44 crc kubenswrapper[4796]: I0127 07:05:44.977890 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-6qqbr_9d2d7f37-8d8f-4d7b-a77b-1769764774a3/nmstate-metrics/0.log" Jan 27 07:05:45 crc kubenswrapper[4796]: I0127 07:05:45.107449 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-kvmjs_adb86a43-6bb2-4198-afe4-ca6484e020df/nmstate-operator/0.log" Jan 27 07:05:45 crc kubenswrapper[4796]: I0127 07:05:45.130797 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-bctnp_7b54d04c-747b-48f4-92b1-bcece1212861/nmstate-webhook/0.log" Jan 27 07:06:11 crc kubenswrapper[4796]: I0127 07:06:11.722665 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-dlvdr_9f97c3a8-70cb-4ee7-ab99-25430a2e3bc9/kube-rbac-proxy/0.log" Jan 27 07:06:11 crc kubenswrapper[4796]: I0127 07:06:11.772786 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-dlvdr_9f97c3a8-70cb-4ee7-ab99-25430a2e3bc9/controller/0.log" Jan 27 07:06:11 crc kubenswrapper[4796]: I0127 07:06:11.893893 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhh26_28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c/cp-frr-files/0.log" Jan 27 07:06:12 crc kubenswrapper[4796]: I0127 07:06:12.057858 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhh26_28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c/cp-frr-files/0.log" Jan 27 07:06:12 crc kubenswrapper[4796]: I0127 07:06:12.094346 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhh26_28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c/cp-reloader/0.log" Jan 27 07:06:12 crc kubenswrapper[4796]: I0127 07:06:12.103521 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhh26_28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c/cp-reloader/0.log" Jan 27 07:06:12 crc kubenswrapper[4796]: I0127 07:06:12.117335 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhh26_28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c/cp-metrics/0.log" Jan 27 07:06:12 crc kubenswrapper[4796]: I0127 07:06:12.319957 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhh26_28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c/cp-frr-files/0.log" Jan 27 
07:06:12 crc kubenswrapper[4796]: I0127 07:06:12.339214 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhh26_28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c/cp-reloader/0.log" Jan 27 07:06:12 crc kubenswrapper[4796]: I0127 07:06:12.340927 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhh26_28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c/cp-metrics/0.log" Jan 27 07:06:12 crc kubenswrapper[4796]: I0127 07:06:12.350232 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhh26_28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c/cp-metrics/0.log" Jan 27 07:06:12 crc kubenswrapper[4796]: I0127 07:06:12.524258 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhh26_28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c/cp-reloader/0.log" Jan 27 07:06:12 crc kubenswrapper[4796]: I0127 07:06:12.544870 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhh26_28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c/controller/0.log" Jan 27 07:06:12 crc kubenswrapper[4796]: I0127 07:06:12.600801 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhh26_28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c/cp-frr-files/0.log" Jan 27 07:06:12 crc kubenswrapper[4796]: I0127 07:06:12.631941 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhh26_28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c/cp-metrics/0.log" Jan 27 07:06:12 crc kubenswrapper[4796]: I0127 07:06:12.718010 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhh26_28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c/frr-metrics/0.log" Jan 27 07:06:12 crc kubenswrapper[4796]: I0127 07:06:12.772371 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhh26_28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c/frr/0.log" Jan 27 07:06:12 crc kubenswrapper[4796]: I0127 07:06:12.806614 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhh26_28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c/kube-rbac-proxy-frr/0.log" Jan 27 07:06:12 crc kubenswrapper[4796]: I0127 07:06:12.806727 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhh26_28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c/kube-rbac-proxy/0.log" Jan 27 07:06:12 crc kubenswrapper[4796]: I0127 07:06:12.907512 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rhh26_28376a2f-3d1a-455e-b2b1-2a71aa0c2b7c/reloader/0.log" Jan 27 07:06:12 crc kubenswrapper[4796]: I0127 07:06:12.987646 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-bkdpj_29719030-ae0e-4e4d-8932-9e7edc2f1b1f/frr-k8s-webhook-server/0.log" Jan 27 07:06:13 crc kubenswrapper[4796]: I0127 07:06:13.256211 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-549fb8c6d4-w79rq_95eec457-204e-4db3-8efe-973e0db5db30/manager/0.log" Jan 27 07:06:13 crc kubenswrapper[4796]: I0127 07:06:13.300570 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-8f79c48d5-5xncb_63f0d934-32d6-4577-9967-b1bfa261d72a/webhook-server/0.log" Jan 27 07:06:13 crc kubenswrapper[4796]: I0127 07:06:13.419816 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7dwl5_b831a3a4-784d-47c8-bb07-e4f40445f066/kube-rbac-proxy/0.log" Jan 27 07:06:13 crc kubenswrapper[4796]: I0127 
07:06:13.574396 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7dwl5_b831a3a4-784d-47c8-bb07-e4f40445f066/speaker/0.log" Jan 27 07:06:26 crc kubenswrapper[4796]: I0127 07:06:26.938911 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgfdx5_7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab/util/0.log" Jan 27 07:06:27 crc kubenswrapper[4796]: I0127 07:06:27.077443 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgfdx5_7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab/pull/0.log" Jan 27 07:06:27 crc kubenswrapper[4796]: I0127 07:06:27.089515 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgfdx5_7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab/util/0.log" Jan 27 07:06:27 crc kubenswrapper[4796]: I0127 07:06:27.157170 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgfdx5_7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab/pull/0.log" Jan 27 07:06:27 crc kubenswrapper[4796]: I0127 07:06:27.274314 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgfdx5_7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab/util/0.log" Jan 27 07:06:27 crc kubenswrapper[4796]: I0127 07:06:27.274482 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgfdx5_7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab/pull/0.log" Jan 27 07:06:27 crc kubenswrapper[4796]: I0127 07:06:27.285524 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcgfdx5_7aed5efe-8cb6-4dc1-b17a-331b5bfd64ab/extract/0.log" Jan 27 07:06:27 crc kubenswrapper[4796]: I0127 07:06:27.428304 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x2pb7_14c08575-c4be-4c00-82e0-57ba579cd64e/util/0.log" Jan 27 07:06:27 crc kubenswrapper[4796]: I0127 07:06:27.597600 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x2pb7_14c08575-c4be-4c00-82e0-57ba579cd64e/pull/0.log" Jan 27 07:06:27 crc kubenswrapper[4796]: I0127 07:06:27.598439 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x2pb7_14c08575-c4be-4c00-82e0-57ba579cd64e/util/0.log" Jan 27 07:06:27 crc kubenswrapper[4796]: I0127 07:06:27.609468 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x2pb7_14c08575-c4be-4c00-82e0-57ba579cd64e/pull/0.log" Jan 27 07:06:27 crc kubenswrapper[4796]: I0127 07:06:27.759984 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x2pb7_14c08575-c4be-4c00-82e0-57ba579cd64e/extract/0.log" Jan 27 07:06:27 crc kubenswrapper[4796]: I0127 07:06:27.810726 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x2pb7_14c08575-c4be-4c00-82e0-57ba579cd64e/pull/0.log" Jan 27 07:06:27 crc kubenswrapper[4796]: I0127 07:06:27.811594 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713x2pb7_14c08575-c4be-4c00-82e0-57ba579cd64e/util/0.log" Jan 27 07:06:27 crc kubenswrapper[4796]: I0127 07:06:27.936790 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7gnq6_2edf3743-0387-4779-b77c-38f486e3eb2d/extract-utilities/0.log" Jan 27 07:06:28 crc kubenswrapper[4796]: I0127 07:06:28.084121 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7gnq6_2edf3743-0387-4779-b77c-38f486e3eb2d/extract-utilities/0.log" Jan 27 07:06:28 crc kubenswrapper[4796]: I0127 07:06:28.119268 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7gnq6_2edf3743-0387-4779-b77c-38f486e3eb2d/extract-content/0.log" Jan 27 07:06:28 crc kubenswrapper[4796]: I0127 07:06:28.146824 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7gnq6_2edf3743-0387-4779-b77c-38f486e3eb2d/extract-content/0.log" Jan 27 07:06:28 crc kubenswrapper[4796]: I0127 07:06:28.328705 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7gnq6_2edf3743-0387-4779-b77c-38f486e3eb2d/extract-content/0.log" Jan 27 07:06:28 crc kubenswrapper[4796]: I0127 07:06:28.329390 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7gnq6_2edf3743-0387-4779-b77c-38f486e3eb2d/extract-utilities/0.log" Jan 27 07:06:28 crc kubenswrapper[4796]: I0127 07:06:28.448660 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7gnq6_2edf3743-0387-4779-b77c-38f486e3eb2d/registry-server/0.log" Jan 27 07:06:28 crc kubenswrapper[4796]: I0127 07:06:28.518944 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9qlsp_e9e051cd-3b4f-4ce4-85fd-9fa61162e0bb/extract-utilities/0.log" Jan 27 07:06:28 crc kubenswrapper[4796]: I0127 07:06:28.671722 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9qlsp_e9e051cd-3b4f-4ce4-85fd-9fa61162e0bb/extract-content/0.log" Jan 27 07:06:28 crc kubenswrapper[4796]: I0127 07:06:28.704969 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9qlsp_e9e051cd-3b4f-4ce4-85fd-9fa61162e0bb/extract-utilities/0.log" Jan 27 07:06:28 crc kubenswrapper[4796]: I0127 07:06:28.708582 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9qlsp_e9e051cd-3b4f-4ce4-85fd-9fa61162e0bb/extract-content/0.log" Jan 27 07:06:28 crc kubenswrapper[4796]: I0127 07:06:28.857780 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9qlsp_e9e051cd-3b4f-4ce4-85fd-9fa61162e0bb/extract-content/0.log" Jan 27 07:06:28 crc kubenswrapper[4796]: I0127 07:06:28.868476 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9qlsp_e9e051cd-3b4f-4ce4-85fd-9fa61162e0bb/extract-utilities/0.log" Jan 27 07:06:29 crc kubenswrapper[4796]: I0127 07:06:29.059171 4796 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-8dx6h_e797ebfa-a82a-42f5-883f-68d70ae80e7f/marketplace-operator/0.log" Jan 27 07:06:29 crc kubenswrapper[4796]: I0127 07:06:29.081422 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2grzv_3dbdcaa5-dd05-4b9c-95a9-5bb30f2d09f8/extract-utilities/0.log" Jan 27 07:06:29 crc kubenswrapper[4796]: I0127 07:06:29.201685 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9qlsp_e9e051cd-3b4f-4ce4-85fd-9fa61162e0bb/registry-server/0.log" Jan 27 07:06:29 crc kubenswrapper[4796]: I0127 07:06:29.335591 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2grzv_3dbdcaa5-dd05-4b9c-95a9-5bb30f2d09f8/extract-utilities/0.log" Jan 27 07:06:29 crc kubenswrapper[4796]: I0127 07:06:29.357656 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2grzv_3dbdcaa5-dd05-4b9c-95a9-5bb30f2d09f8/extract-content/0.log" Jan 27 07:06:29 crc kubenswrapper[4796]: I0127 07:06:29.384575 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2grzv_3dbdcaa5-dd05-4b9c-95a9-5bb30f2d09f8/extract-content/0.log" Jan 27 07:06:29 crc kubenswrapper[4796]: I0127 07:06:29.520036 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2grzv_3dbdcaa5-dd05-4b9c-95a9-5bb30f2d09f8/extract-content/0.log" Jan 27 07:06:29 crc kubenswrapper[4796]: I0127 07:06:29.526036 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2grzv_3dbdcaa5-dd05-4b9c-95a9-5bb30f2d09f8/extract-utilities/0.log" Jan 27 07:06:29 crc kubenswrapper[4796]: I0127 07:06:29.622043 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2grzv_3dbdcaa5-dd05-4b9c-95a9-5bb30f2d09f8/registry-server/0.log" Jan 27 07:06:29 crc kubenswrapper[4796]: I0127 07:06:29.732810 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j2snq_e16325da-a1d2-4dd0-bd2c-f3b0c90db131/extract-utilities/0.log" Jan 27 07:06:29 crc kubenswrapper[4796]: I0127 07:06:29.854359 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j2snq_e16325da-a1d2-4dd0-bd2c-f3b0c90db131/extract-utilities/0.log" Jan 27 07:06:29 crc kubenswrapper[4796]: I0127 07:06:29.867830 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j2snq_e16325da-a1d2-4dd0-bd2c-f3b0c90db131/extract-content/0.log" Jan 27 07:06:29 crc kubenswrapper[4796]: I0127 07:06:29.890827 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j2snq_e16325da-a1d2-4dd0-bd2c-f3b0c90db131/extract-content/0.log" Jan 27 07:06:30 crc kubenswrapper[4796]: I0127 07:06:30.044672 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j2snq_e16325da-a1d2-4dd0-bd2c-f3b0c90db131/extract-utilities/0.log" Jan 27 07:06:30 crc kubenswrapper[4796]: I0127 07:06:30.052320 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j2snq_e16325da-a1d2-4dd0-bd2c-f3b0c90db131/extract-content/0.log" Jan 27 07:06:30 crc kubenswrapper[4796]: I0127 07:06:30.438495 4796 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j2snq_e16325da-a1d2-4dd0-bd2c-f3b0c90db131/registry-server/0.log" Jan 27 07:06:33 crc kubenswrapper[4796]: I0127 07:06:33.788327 4796 patch_prober.go:28] interesting pod/machine-config-daemon-qfqgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:06:33 crc kubenswrapper[4796]: I0127 07:06:33.788834 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" podUID="84d7512b-555d-440a-b817-deb8ba12f61d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:07:03 crc kubenswrapper[4796]: I0127 07:07:03.787877 4796 patch_prober.go:28] interesting pod/machine-config-daemon-qfqgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:07:03 crc kubenswrapper[4796]: I0127 07:07:03.788517 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" podUID="84d7512b-555d-440a-b817-deb8ba12f61d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:07:33 crc kubenswrapper[4796]: I0127 07:07:33.788414 4796 patch_prober.go:28] interesting pod/machine-config-daemon-qfqgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:07:33 crc kubenswrapper[4796]: I0127 07:07:33.789081 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" podUID="84d7512b-555d-440a-b817-deb8ba12f61d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:07:33 crc kubenswrapper[4796]: I0127 07:07:33.789141 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" Jan 27 07:07:33 crc kubenswrapper[4796]: I0127 07:07:33.789833 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9d6ff2d1b646e667cc028cbd1ff4dea7c896e3e2292d3fb00ecbecdf6a7880b1"} pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 07:07:33 crc kubenswrapper[4796]: I0127 07:07:33.789892 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" podUID="84d7512b-555d-440a-b817-deb8ba12f61d" containerName="machine-config-daemon" containerID="cri-o://9d6ff2d1b646e667cc028cbd1ff4dea7c896e3e2292d3fb00ecbecdf6a7880b1" gracePeriod=600 Jan 27 07:07:34 crc kubenswrapper[4796]: I0127 07:07:34.134435 4796 generic.go:334] "Generic (PLEG): container finished" 
podID="84d7512b-555d-440a-b817-deb8ba12f61d" containerID="9d6ff2d1b646e667cc028cbd1ff4dea7c896e3e2292d3fb00ecbecdf6a7880b1" exitCode=0 Jan 27 07:07:34 crc kubenswrapper[4796]: I0127 07:07:34.134755 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" event={"ID":"84d7512b-555d-440a-b817-deb8ba12f61d","Type":"ContainerDied","Data":"9d6ff2d1b646e667cc028cbd1ff4dea7c896e3e2292d3fb00ecbecdf6a7880b1"} Jan 27 07:07:34 crc kubenswrapper[4796]: I0127 07:07:34.135075 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" event={"ID":"84d7512b-555d-440a-b817-deb8ba12f61d","Type":"ContainerStarted","Data":"fd97f058415624dbb6034286295bb31089e7b13227a78c7f9b9089ec94201786"} Jan 27 07:07:34 crc kubenswrapper[4796]: I0127 07:07:34.135123 4796 scope.go:117] "RemoveContainer" containerID="5279cebf05cbd111eece26525bf47f8c49c929ff7313221c4f6857c9246306cd" Jan 27 07:07:39 crc kubenswrapper[4796]: I0127 07:07:39.183250 4796 generic.go:334] "Generic (PLEG): container finished" podID="11872b31-f60e-4d90-a428-964a03a45dcc" containerID="1c999a90d4cfae2e7388eccb9e3cff6b614d4790922888c2db64b2f09a58c282" exitCode=0 Jan 27 07:07:39 crc kubenswrapper[4796]: I0127 07:07:39.183432 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tm8kr/must-gather-zwd25" event={"ID":"11872b31-f60e-4d90-a428-964a03a45dcc","Type":"ContainerDied","Data":"1c999a90d4cfae2e7388eccb9e3cff6b614d4790922888c2db64b2f09a58c282"} Jan 27 07:07:39 crc kubenswrapper[4796]: I0127 07:07:39.184262 4796 scope.go:117] "RemoveContainer" containerID="1c999a90d4cfae2e7388eccb9e3cff6b614d4790922888c2db64b2f09a58c282" Jan 27 07:07:40 crc kubenswrapper[4796]: I0127 07:07:40.088121 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tm8kr_must-gather-zwd25_11872b31-f60e-4d90-a428-964a03a45dcc/gather/0.log" Jan 27 07:07:47 crc kubenswrapper[4796]: I0127 07:07:47.017310 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tm8kr/must-gather-zwd25"] Jan 27 07:07:47 crc kubenswrapper[4796]: I0127 07:07:47.018017 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-tm8kr/must-gather-zwd25" podUID="11872b31-f60e-4d90-a428-964a03a45dcc" containerName="copy" containerID="cri-o://026c1194801852722b01b135c1ad1fed69234f66bf2023c9f3fc8aed69373b54" gracePeriod=2 Jan 27 07:07:47 crc kubenswrapper[4796]: I0127 07:07:47.033597 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tm8kr/must-gather-zwd25"] Jan 27 07:07:47 crc kubenswrapper[4796]: I0127 07:07:47.263208 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tm8kr_must-gather-zwd25_11872b31-f60e-4d90-a428-964a03a45dcc/copy/0.log" Jan 27 07:07:47 crc kubenswrapper[4796]: I0127 07:07:47.263903 4796 generic.go:334] "Generic (PLEG): container finished" podID="11872b31-f60e-4d90-a428-964a03a45dcc" containerID="026c1194801852722b01b135c1ad1fed69234f66bf2023c9f3fc8aed69373b54" exitCode=143 Jan 27 07:07:47 crc kubenswrapper[4796]: I0127 07:07:47.367669 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tm8kr_must-gather-zwd25_11872b31-f60e-4d90-a428-964a03a45dcc/copy/0.log" Jan 27 07:07:47 crc kubenswrapper[4796]: I0127 07:07:47.368058 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tm8kr/must-gather-zwd25" Jan 27 07:07:47 crc kubenswrapper[4796]: I0127 07:07:47.529422 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/11872b31-f60e-4d90-a428-964a03a45dcc-must-gather-output\") pod \"11872b31-f60e-4d90-a428-964a03a45dcc\" (UID: \"11872b31-f60e-4d90-a428-964a03a45dcc\") " Jan 27 07:07:47 crc kubenswrapper[4796]: I0127 07:07:47.529509 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95jjj\" (UniqueName: \"kubernetes.io/projected/11872b31-f60e-4d90-a428-964a03a45dcc-kube-api-access-95jjj\") pod \"11872b31-f60e-4d90-a428-964a03a45dcc\" (UID: \"11872b31-f60e-4d90-a428-964a03a45dcc\") " Jan 27 07:07:47 crc kubenswrapper[4796]: I0127 07:07:47.539829 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11872b31-f60e-4d90-a428-964a03a45dcc-kube-api-access-95jjj" (OuterVolumeSpecName: "kube-api-access-95jjj") pod "11872b31-f60e-4d90-a428-964a03a45dcc" (UID: "11872b31-f60e-4d90-a428-964a03a45dcc"). InnerVolumeSpecName "kube-api-access-95jjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:07:47 crc kubenswrapper[4796]: I0127 07:07:47.622607 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11872b31-f60e-4d90-a428-964a03a45dcc-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "11872b31-f60e-4d90-a428-964a03a45dcc" (UID: "11872b31-f60e-4d90-a428-964a03a45dcc"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:07:47 crc kubenswrapper[4796]: I0127 07:07:47.630771 4796 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/11872b31-f60e-4d90-a428-964a03a45dcc-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 27 07:07:47 crc kubenswrapper[4796]: I0127 07:07:47.630820 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95jjj\" (UniqueName: \"kubernetes.io/projected/11872b31-f60e-4d90-a428-964a03a45dcc-kube-api-access-95jjj\") on node \"crc\" DevicePath \"\"" Jan 27 07:07:48 crc kubenswrapper[4796]: I0127 07:07:48.272141 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tm8kr_must-gather-zwd25_11872b31-f60e-4d90-a428-964a03a45dcc/copy/0.log" Jan 27 07:07:48 crc kubenswrapper[4796]: I0127 07:07:48.272668 4796 scope.go:117] "RemoveContainer" containerID="026c1194801852722b01b135c1ad1fed69234f66bf2023c9f3fc8aed69373b54" Jan 27 07:07:48 crc kubenswrapper[4796]: I0127 07:07:48.272715 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tm8kr/must-gather-zwd25" Jan 27 07:07:48 crc kubenswrapper[4796]: I0127 07:07:48.298591 4796 scope.go:117] "RemoveContainer" containerID="1c999a90d4cfae2e7388eccb9e3cff6b614d4790922888c2db64b2f09a58c282" Jan 27 07:07:48 crc kubenswrapper[4796]: I0127 07:07:48.761180 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11872b31-f60e-4d90-a428-964a03a45dcc" path="/var/lib/kubelet/pods/11872b31-f60e-4d90-a428-964a03a45dcc/volumes" Jan 27 07:09:13 crc kubenswrapper[4796]: I0127 07:09:13.789474 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lmfkm"] Jan 27 07:09:13 crc kubenswrapper[4796]: E0127 07:09:13.790700 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11872b31-f60e-4d90-a428-964a03a45dcc" containerName="gather" Jan 27 07:09:13 crc kubenswrapper[4796]: I0127 07:09:13.790723 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="11872b31-f60e-4d90-a428-964a03a45dcc" containerName="gather" Jan 27 07:09:13 crc kubenswrapper[4796]: E0127 07:09:13.790743 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11872b31-f60e-4d90-a428-964a03a45dcc" containerName="copy" Jan 27 07:09:13 crc kubenswrapper[4796]: I0127 07:09:13.790756 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="11872b31-f60e-4d90-a428-964a03a45dcc" containerName="copy" Jan 27 07:09:13 crc kubenswrapper[4796]: I0127 07:09:13.791037 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="11872b31-f60e-4d90-a428-964a03a45dcc" containerName="copy" Jan 27 07:09:13 crc kubenswrapper[4796]: I0127 07:09:13.791070 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="11872b31-f60e-4d90-a428-964a03a45dcc" containerName="gather" Jan 27 07:09:13 crc kubenswrapper[4796]: I0127 07:09:13.796259 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lmfkm" Jan 27 07:09:13 crc kubenswrapper[4796]: I0127 07:09:13.809129 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lmfkm"] Jan 27 07:09:13 crc kubenswrapper[4796]: I0127 07:09:13.900207 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c027271-13f4-4156-8331-2f81a457593c-utilities\") pod \"redhat-operators-lmfkm\" (UID: \"7c027271-13f4-4156-8331-2f81a457593c\") " pod="openshift-marketplace/redhat-operators-lmfkm" Jan 27 07:09:13 crc kubenswrapper[4796]: I0127 07:09:13.901038 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp644\" (UniqueName: \"kubernetes.io/projected/7c027271-13f4-4156-8331-2f81a457593c-kube-api-access-pp644\") pod \"redhat-operators-lmfkm\" (UID: \"7c027271-13f4-4156-8331-2f81a457593c\") " pod="openshift-marketplace/redhat-operators-lmfkm" Jan 27 07:09:13 crc kubenswrapper[4796]: I0127 07:09:13.901244 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c027271-13f4-4156-8331-2f81a457593c-catalog-content\") pod \"redhat-operators-lmfkm\" (UID: \"7c027271-13f4-4156-8331-2f81a457593c\") " pod="openshift-marketplace/redhat-operators-lmfkm" Jan 27 07:09:14 crc kubenswrapper[4796]: I0127 07:09:14.003051 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c027271-13f4-4156-8331-2f81a457593c-utilities\") pod \"redhat-operators-lmfkm\" (UID: \"7c027271-13f4-4156-8331-2f81a457593c\") " pod="openshift-marketplace/redhat-operators-lmfkm" Jan 27 07:09:14 crc kubenswrapper[4796]: I0127 07:09:14.003160 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp644\" (UniqueName: \"kubernetes.io/projected/7c027271-13f4-4156-8331-2f81a457593c-kube-api-access-pp644\") pod \"redhat-operators-lmfkm\" (UID: \"7c027271-13f4-4156-8331-2f81a457593c\") " pod="openshift-marketplace/redhat-operators-lmfkm" Jan 27 07:09:14 crc kubenswrapper[4796]: I0127 07:09:14.003220 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c027271-13f4-4156-8331-2f81a457593c-catalog-content\") pod \"redhat-operators-lmfkm\" (UID: \"7c027271-13f4-4156-8331-2f81a457593c\") " pod="openshift-marketplace/redhat-operators-lmfkm" Jan 27 07:09:14 crc kubenswrapper[4796]: I0127 07:09:14.003764 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c027271-13f4-4156-8331-2f81a457593c-utilities\") pod \"redhat-operators-lmfkm\" (UID: \"7c027271-13f4-4156-8331-2f81a457593c\") " pod="openshift-marketplace/redhat-operators-lmfkm" Jan 27 07:09:14 crc kubenswrapper[4796]: I0127 07:09:14.003823 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c027271-13f4-4156-8331-2f81a457593c-catalog-content\") pod \"redhat-operators-lmfkm\" (UID: \"7c027271-13f4-4156-8331-2f81a457593c\") " pod="openshift-marketplace/redhat-operators-lmfkm" Jan 27 07:09:14 crc kubenswrapper[4796]: I0127 07:09:14.026812 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-pp644\" (UniqueName: \"kubernetes.io/projected/7c027271-13f4-4156-8331-2f81a457593c-kube-api-access-pp644\") pod \"redhat-operators-lmfkm\" (UID: \"7c027271-13f4-4156-8331-2f81a457593c\") " pod="openshift-marketplace/redhat-operators-lmfkm" Jan 27 07:09:14 crc kubenswrapper[4796]: I0127 07:09:14.123832 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lmfkm" Jan 27 07:09:14 crc kubenswrapper[4796]: I0127 07:09:14.583477 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lmfkm"] Jan 27 07:09:14 crc kubenswrapper[4796]: I0127 07:09:14.940360 4796 generic.go:334] "Generic (PLEG): container finished" podID="7c027271-13f4-4156-8331-2f81a457593c" containerID="700d93e0ea037fc221417781561328500d05c8c48463c4bb8d85219bddad454a" exitCode=0 Jan 27 07:09:14 crc kubenswrapper[4796]: I0127 07:09:14.942027 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmfkm" event={"ID":"7c027271-13f4-4156-8331-2f81a457593c","Type":"ContainerDied","Data":"700d93e0ea037fc221417781561328500d05c8c48463c4bb8d85219bddad454a"} Jan 27 07:09:14 crc kubenswrapper[4796]: I0127 07:09:14.942136 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmfkm" event={"ID":"7c027271-13f4-4156-8331-2f81a457593c","Type":"ContainerStarted","Data":"faf62b21010f16e2d5d987c47b2c09a2520520da69b92cd786b1abf0a2663ff7"} Jan 27 07:09:14 crc kubenswrapper[4796]: I0127 07:09:14.942824 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 27 07:09:15 crc kubenswrapper[4796]: I0127 07:09:15.950712 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmfkm" event={"ID":"7c027271-13f4-4156-8331-2f81a457593c","Type":"ContainerStarted","Data":"91d5859c0a70acd720e30d31d39335a92ac05d5a70678b784713bc2de28a03e7"} Jan 27 07:09:16 crc kubenswrapper[4796]: I0127 07:09:16.963818 4796 generic.go:334] "Generic (PLEG): container finished" podID="7c027271-13f4-4156-8331-2f81a457593c" containerID="91d5859c0a70acd720e30d31d39335a92ac05d5a70678b784713bc2de28a03e7" exitCode=0 Jan 27 07:09:16 crc kubenswrapper[4796]: I0127 07:09:16.963859 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmfkm" event={"ID":"7c027271-13f4-4156-8331-2f81a457593c","Type":"ContainerDied","Data":"91d5859c0a70acd720e30d31d39335a92ac05d5a70678b784713bc2de28a03e7"} Jan 27 07:09:17 crc kubenswrapper[4796]: I0127 07:09:17.974977 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmfkm" event={"ID":"7c027271-13f4-4156-8331-2f81a457593c","Type":"ContainerStarted","Data":"994fc9b3b1c14a6547b31213c73162e483e0bb32ca9971d1af95056cb4318c9d"} Jan 27 07:09:18 crc kubenswrapper[4796]: I0127 07:09:18.003079 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lmfkm" podStartSLOduration=2.601125256 podStartE2EDuration="5.003060948s" podCreationTimestamp="2026-01-27 07:09:13 +0000 UTC" firstStartedPulling="2026-01-27 07:09:14.942638593 +0000 UTC m=+1376.049605920" lastFinishedPulling="2026-01-27 07:09:17.344574275 +0000 UTC m=+1378.451541612" observedRunningTime="2026-01-27 07:09:18.002317669 +0000 UTC m=+1379.109284996" watchObservedRunningTime="2026-01-27 07:09:18.003060948 +0000 UTC m=+1379.110028285" Jan 27 07:09:24 crc 
kubenswrapper[4796]: I0127 07:09:24.124713 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lmfkm" Jan 27 07:09:24 crc kubenswrapper[4796]: I0127 07:09:24.125296 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lmfkm" Jan 27 07:09:25 crc kubenswrapper[4796]: I0127 07:09:25.188977 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lmfkm" podUID="7c027271-13f4-4156-8331-2f81a457593c" containerName="registry-server" probeResult="failure" output=< Jan 27 07:09:25 crc kubenswrapper[4796]: timeout: failed to connect service ":50051" within 1s Jan 27 07:09:25 crc kubenswrapper[4796]: > Jan 27 07:09:34 crc kubenswrapper[4796]: I0127 07:09:34.183498 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lmfkm" Jan 27 07:09:34 crc kubenswrapper[4796]: I0127 07:09:34.240323 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lmfkm" Jan 27 07:09:34 crc kubenswrapper[4796]: I0127 07:09:34.424143 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lmfkm"] Jan 27 07:09:36 crc kubenswrapper[4796]: I0127 07:09:36.130328 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lmfkm" podUID="7c027271-13f4-4156-8331-2f81a457593c" containerName="registry-server" containerID="cri-o://994fc9b3b1c14a6547b31213c73162e483e0bb32ca9971d1af95056cb4318c9d" gracePeriod=2 Jan 27 07:09:37 crc kubenswrapper[4796]: I0127 07:09:37.098492 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lmfkm" Jan 27 07:09:37 crc kubenswrapper[4796]: I0127 07:09:37.149900 4796 generic.go:334] "Generic (PLEG): container finished" podID="7c027271-13f4-4156-8331-2f81a457593c" containerID="994fc9b3b1c14a6547b31213c73162e483e0bb32ca9971d1af95056cb4318c9d" exitCode=0 Jan 27 07:09:37 crc kubenswrapper[4796]: I0127 07:09:37.149972 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmfkm" event={"ID":"7c027271-13f4-4156-8331-2f81a457593c","Type":"ContainerDied","Data":"994fc9b3b1c14a6547b31213c73162e483e0bb32ca9971d1af95056cb4318c9d"} Jan 27 07:09:37 crc kubenswrapper[4796]: I0127 07:09:37.150035 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmfkm" event={"ID":"7c027271-13f4-4156-8331-2f81a457593c","Type":"ContainerDied","Data":"faf62b21010f16e2d5d987c47b2c09a2520520da69b92cd786b1abf0a2663ff7"} Jan 27 07:09:37 crc kubenswrapper[4796]: I0127 07:09:37.150050 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lmfkm" Jan 27 07:09:37 crc kubenswrapper[4796]: I0127 07:09:37.150075 4796 scope.go:117] "RemoveContainer" containerID="994fc9b3b1c14a6547b31213c73162e483e0bb32ca9971d1af95056cb4318c9d" Jan 27 07:09:37 crc kubenswrapper[4796]: I0127 07:09:37.179632 4796 scope.go:117] "RemoveContainer" containerID="91d5859c0a70acd720e30d31d39335a92ac05d5a70678b784713bc2de28a03e7" Jan 27 07:09:37 crc kubenswrapper[4796]: I0127 07:09:37.202613 4796 scope.go:117] "RemoveContainer" containerID="700d93e0ea037fc221417781561328500d05c8c48463c4bb8d85219bddad454a" Jan 27 07:09:37 crc kubenswrapper[4796]: I0127 07:09:37.221408 4796 scope.go:117] "RemoveContainer" containerID="994fc9b3b1c14a6547b31213c73162e483e0bb32ca9971d1af95056cb4318c9d" Jan 27 07:09:37 crc kubenswrapper[4796]: E0127 07:09:37.222338 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"994fc9b3b1c14a6547b31213c73162e483e0bb32ca9971d1af95056cb4318c9d\": container with ID starting with 994fc9b3b1c14a6547b31213c73162e483e0bb32ca9971d1af95056cb4318c9d not found: ID does not exist" containerID="994fc9b3b1c14a6547b31213c73162e483e0bb32ca9971d1af95056cb4318c9d" Jan 27 07:09:37 crc kubenswrapper[4796]: I0127 07:09:37.222404 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"994fc9b3b1c14a6547b31213c73162e483e0bb32ca9971d1af95056cb4318c9d"} err="failed to get container status \"994fc9b3b1c14a6547b31213c73162e483e0bb32ca9971d1af95056cb4318c9d\": rpc error: code = NotFound desc = could not find container \"994fc9b3b1c14a6547b31213c73162e483e0bb32ca9971d1af95056cb4318c9d\": container with ID starting with 994fc9b3b1c14a6547b31213c73162e483e0bb32ca9971d1af95056cb4318c9d not found: ID does not exist" Jan 27 07:09:37 crc kubenswrapper[4796]: I0127 07:09:37.222435 4796 scope.go:117] "RemoveContainer" containerID="91d5859c0a70acd720e30d31d39335a92ac05d5a70678b784713bc2de28a03e7" Jan 27 07:09:37 crc kubenswrapper[4796]: E0127 07:09:37.223021 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91d5859c0a70acd720e30d31d39335a92ac05d5a70678b784713bc2de28a03e7\": container with ID starting with 91d5859c0a70acd720e30d31d39335a92ac05d5a70678b784713bc2de28a03e7 not found: ID does not exist" containerID="91d5859c0a70acd720e30d31d39335a92ac05d5a70678b784713bc2de28a03e7" Jan 27 07:09:37 crc kubenswrapper[4796]: I0127 07:09:37.223102 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91d5859c0a70acd720e30d31d39335a92ac05d5a70678b784713bc2de28a03e7"} err="failed to get container status \"91d5859c0a70acd720e30d31d39335a92ac05d5a70678b784713bc2de28a03e7\": rpc error: code = NotFound desc = could not find container \"91d5859c0a70acd720e30d31d39335a92ac05d5a70678b784713bc2de28a03e7\": container with ID starting with 91d5859c0a70acd720e30d31d39335a92ac05d5a70678b784713bc2de28a03e7 not found: ID does not exist" Jan 27 07:09:37 crc kubenswrapper[4796]: I0127 07:09:37.223181 4796 scope.go:117] "RemoveContainer" containerID="700d93e0ea037fc221417781561328500d05c8c48463c4bb8d85219bddad454a" Jan 27 07:09:37 crc kubenswrapper[4796]: E0127 07:09:37.223840 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"700d93e0ea037fc221417781561328500d05c8c48463c4bb8d85219bddad454a\": container with ID starting 
with 700d93e0ea037fc221417781561328500d05c8c48463c4bb8d85219bddad454a not found: ID does not exist" containerID="700d93e0ea037fc221417781561328500d05c8c48463c4bb8d85219bddad454a" Jan 27 07:09:37 crc kubenswrapper[4796]: I0127 07:09:37.223909 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"700d93e0ea037fc221417781561328500d05c8c48463c4bb8d85219bddad454a"} err="failed to get container status \"700d93e0ea037fc221417781561328500d05c8c48463c4bb8d85219bddad454a\": rpc error: code = NotFound desc = could not find container \"700d93e0ea037fc221417781561328500d05c8c48463c4bb8d85219bddad454a\": container with ID starting with 700d93e0ea037fc221417781561328500d05c8c48463c4bb8d85219bddad454a not found: ID does not exist" Jan 27 07:09:37 crc kubenswrapper[4796]: I0127 07:09:37.266840 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c027271-13f4-4156-8331-2f81a457593c-utilities\") pod \"7c027271-13f4-4156-8331-2f81a457593c\" (UID: \"7c027271-13f4-4156-8331-2f81a457593c\") " Jan 27 07:09:37 crc kubenswrapper[4796]: I0127 07:09:37.266910 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c027271-13f4-4156-8331-2f81a457593c-catalog-content\") pod \"7c027271-13f4-4156-8331-2f81a457593c\" (UID: \"7c027271-13f4-4156-8331-2f81a457593c\") " Jan 27 07:09:37 crc kubenswrapper[4796]: I0127 07:09:37.267124 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp644\" (UniqueName: \"kubernetes.io/projected/7c027271-13f4-4156-8331-2f81a457593c-kube-api-access-pp644\") pod \"7c027271-13f4-4156-8331-2f81a457593c\" (UID: \"7c027271-13f4-4156-8331-2f81a457593c\") " Jan 27 07:09:37 crc kubenswrapper[4796]: I0127 07:09:37.268076 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c027271-13f4-4156-8331-2f81a457593c-utilities" (OuterVolumeSpecName: "utilities") pod "7c027271-13f4-4156-8331-2f81a457593c" (UID: "7c027271-13f4-4156-8331-2f81a457593c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:09:37 crc kubenswrapper[4796]: I0127 07:09:37.268751 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c027271-13f4-4156-8331-2f81a457593c-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:09:37 crc kubenswrapper[4796]: I0127 07:09:37.277204 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c027271-13f4-4156-8331-2f81a457593c-kube-api-access-pp644" (OuterVolumeSpecName: "kube-api-access-pp644") pod "7c027271-13f4-4156-8331-2f81a457593c" (UID: "7c027271-13f4-4156-8331-2f81a457593c"). InnerVolumeSpecName "kube-api-access-pp644". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:09:37 crc kubenswrapper[4796]: I0127 07:09:37.369827 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pp644\" (UniqueName: \"kubernetes.io/projected/7c027271-13f4-4156-8331-2f81a457593c-kube-api-access-pp644\") on node \"crc\" DevicePath \"\"" Jan 27 07:09:37 crc kubenswrapper[4796]: I0127 07:09:37.383250 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c027271-13f4-4156-8331-2f81a457593c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c027271-13f4-4156-8331-2f81a457593c" (UID: "7c027271-13f4-4156-8331-2f81a457593c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:09:37 crc kubenswrapper[4796]: I0127 07:09:37.472934 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c027271-13f4-4156-8331-2f81a457593c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:09:37 crc kubenswrapper[4796]: I0127 07:09:37.480283 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lmfkm"] Jan 27 07:09:37 crc kubenswrapper[4796]: I0127 07:09:37.493161 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lmfkm"] Jan 27 07:09:38 crc kubenswrapper[4796]: I0127 07:09:38.763280 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c027271-13f4-4156-8331-2f81a457593c" path="/var/lib/kubelet/pods/7c027271-13f4-4156-8331-2f81a457593c/volumes" Jan 27 07:10:01 crc kubenswrapper[4796]: I0127 07:10:01.714675 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s86t5"] Jan 27 07:10:01 crc kubenswrapper[4796]: E0127 07:10:01.716855 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c027271-13f4-4156-8331-2f81a457593c" containerName="extract-content" Jan 27 07:10:01 crc kubenswrapper[4796]: I0127 07:10:01.716902 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c027271-13f4-4156-8331-2f81a457593c" containerName="extract-content" Jan 27 07:10:01 crc kubenswrapper[4796]: E0127 07:10:01.716941 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c027271-13f4-4156-8331-2f81a457593c" containerName="registry-server" Jan 27 07:10:01 crc kubenswrapper[4796]: I0127 07:10:01.716950 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c027271-13f4-4156-8331-2f81a457593c" containerName="registry-server" Jan 27 07:10:01 crc kubenswrapper[4796]: E0127 07:10:01.716982 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c027271-13f4-4156-8331-2f81a457593c" containerName="extract-utilities" Jan 27 07:10:01 crc kubenswrapper[4796]: I0127 07:10:01.717018 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c027271-13f4-4156-8331-2f81a457593c" containerName="extract-utilities" Jan 27 07:10:01 crc kubenswrapper[4796]: I0127 07:10:01.717261 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c027271-13f4-4156-8331-2f81a457593c" containerName="registry-server" Jan 27 07:10:01 crc kubenswrapper[4796]: I0127 07:10:01.722408 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s86t5" Jan 27 07:10:01 crc kubenswrapper[4796]: I0127 07:10:01.736914 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s86t5"] Jan 27 07:10:01 crc kubenswrapper[4796]: I0127 07:10:01.823177 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc2e16de-6aa3-46a2-8411-df8b9c7bf60d-catalog-content\") pod \"community-operators-s86t5\" (UID: \"bc2e16de-6aa3-46a2-8411-df8b9c7bf60d\") " pod="openshift-marketplace/community-operators-s86t5" Jan 27 07:10:01 crc kubenswrapper[4796]: I0127 07:10:01.823607 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc2e16de-6aa3-46a2-8411-df8b9c7bf60d-utilities\") pod \"community-operators-s86t5\" (UID: \"bc2e16de-6aa3-46a2-8411-df8b9c7bf60d\") " pod="openshift-marketplace/community-operators-s86t5" Jan 27 07:10:01 crc kubenswrapper[4796]: I0127 07:10:01.823798 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np5z4\" (UniqueName: \"kubernetes.io/projected/bc2e16de-6aa3-46a2-8411-df8b9c7bf60d-kube-api-access-np5z4\") pod \"community-operators-s86t5\" (UID: \"bc2e16de-6aa3-46a2-8411-df8b9c7bf60d\") " pod="openshift-marketplace/community-operators-s86t5" Jan 27 07:10:01 crc kubenswrapper[4796]: I0127 07:10:01.926414 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np5z4\" (UniqueName: \"kubernetes.io/projected/bc2e16de-6aa3-46a2-8411-df8b9c7bf60d-kube-api-access-np5z4\") pod \"community-operators-s86t5\" (UID: \"bc2e16de-6aa3-46a2-8411-df8b9c7bf60d\") " pod="openshift-marketplace/community-operators-s86t5" Jan 27 07:10:01 crc kubenswrapper[4796]: I0127 07:10:01.928849 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc2e16de-6aa3-46a2-8411-df8b9c7bf60d-catalog-content\") pod \"community-operators-s86t5\" (UID: \"bc2e16de-6aa3-46a2-8411-df8b9c7bf60d\") " pod="openshift-marketplace/community-operators-s86t5" Jan 27 07:10:01 crc kubenswrapper[4796]: I0127 07:10:01.928120 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc2e16de-6aa3-46a2-8411-df8b9c7bf60d-catalog-content\") pod \"community-operators-s86t5\" (UID: \"bc2e16de-6aa3-46a2-8411-df8b9c7bf60d\") " pod="openshift-marketplace/community-operators-s86t5" Jan 27 07:10:01 crc kubenswrapper[4796]: I0127 07:10:01.929066 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc2e16de-6aa3-46a2-8411-df8b9c7bf60d-utilities\") pod \"community-operators-s86t5\" (UID: \"bc2e16de-6aa3-46a2-8411-df8b9c7bf60d\") " pod="openshift-marketplace/community-operators-s86t5" Jan 27 07:10:01 crc kubenswrapper[4796]: I0127 07:10:01.929576 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc2e16de-6aa3-46a2-8411-df8b9c7bf60d-utilities\") pod \"community-operators-s86t5\" (UID: \"bc2e16de-6aa3-46a2-8411-df8b9c7bf60d\") " pod="openshift-marketplace/community-operators-s86t5" Jan 27 07:10:01 crc kubenswrapper[4796]: I0127 07:10:01.951791 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-np5z4\" (UniqueName: \"kubernetes.io/projected/bc2e16de-6aa3-46a2-8411-df8b9c7bf60d-kube-api-access-np5z4\") pod \"community-operators-s86t5\" (UID: \"bc2e16de-6aa3-46a2-8411-df8b9c7bf60d\") " pod="openshift-marketplace/community-operators-s86t5" Jan 27 07:10:02 crc kubenswrapper[4796]: I0127 07:10:02.063014 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s86t5" Jan 27 07:10:02 crc kubenswrapper[4796]: I0127 07:10:02.608086 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s86t5"] Jan 27 07:10:03 crc kubenswrapper[4796]: I0127 07:10:03.376845 4796 generic.go:334] "Generic (PLEG): container finished" podID="bc2e16de-6aa3-46a2-8411-df8b9c7bf60d" containerID="504b82dd47cec802f2c50aa3aceedf9ebafda6ce933c4137df238b7102f43577" exitCode=0 Jan 27 07:10:03 crc kubenswrapper[4796]: I0127 07:10:03.377339 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s86t5" event={"ID":"bc2e16de-6aa3-46a2-8411-df8b9c7bf60d","Type":"ContainerDied","Data":"504b82dd47cec802f2c50aa3aceedf9ebafda6ce933c4137df238b7102f43577"} Jan 27 07:10:03 crc kubenswrapper[4796]: I0127 07:10:03.377395 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s86t5" event={"ID":"bc2e16de-6aa3-46a2-8411-df8b9c7bf60d","Type":"ContainerStarted","Data":"e66dac1b906c45b625b3598d63f235a76b047835a20858e5b6b0e4b46992015e"} Jan 27 07:10:03 crc kubenswrapper[4796]: I0127 07:10:03.788341 4796 patch_prober.go:28] interesting pod/machine-config-daemon-qfqgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:10:03 crc kubenswrapper[4796]: I0127 07:10:03.788727 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" podUID="84d7512b-555d-440a-b817-deb8ba12f61d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:10:04 crc kubenswrapper[4796]: I0127 07:10:04.388519 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s86t5" event={"ID":"bc2e16de-6aa3-46a2-8411-df8b9c7bf60d","Type":"ContainerStarted","Data":"3cd3d8a9357b726d311761fb048f52ecde0bab7e1cd301c3be1525ee36bac559"} Jan 27 07:10:05 crc kubenswrapper[4796]: I0127 07:10:05.406371 4796 generic.go:334] "Generic (PLEG): container finished" podID="bc2e16de-6aa3-46a2-8411-df8b9c7bf60d" containerID="3cd3d8a9357b726d311761fb048f52ecde0bab7e1cd301c3be1525ee36bac559" exitCode=0 Jan 27 07:10:05 crc kubenswrapper[4796]: I0127 07:10:05.406428 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s86t5" event={"ID":"bc2e16de-6aa3-46a2-8411-df8b9c7bf60d","Type":"ContainerDied","Data":"3cd3d8a9357b726d311761fb048f52ecde0bab7e1cd301c3be1525ee36bac559"} Jan 27 07:10:06 crc kubenswrapper[4796]: I0127 07:10:06.416917 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s86t5" event={"ID":"bc2e16de-6aa3-46a2-8411-df8b9c7bf60d","Type":"ContainerStarted","Data":"18519d933cbe78b24ad54ee7cb045258b2b420820ea7a2b1e01ae85f4538b45c"} Jan 27 
07:10:06 crc kubenswrapper[4796]: I0127 07:10:06.438373 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s86t5" podStartSLOduration=3.021160359 podStartE2EDuration="5.438354081s" podCreationTimestamp="2026-01-27 07:10:01 +0000 UTC" firstStartedPulling="2026-01-27 07:10:03.381272196 +0000 UTC m=+1424.488239543" lastFinishedPulling="2026-01-27 07:10:05.798465908 +0000 UTC m=+1426.905433265" observedRunningTime="2026-01-27 07:10:06.437255174 +0000 UTC m=+1427.544222521" watchObservedRunningTime="2026-01-27 07:10:06.438354081 +0000 UTC m=+1427.545321408" Jan 27 07:10:12 crc kubenswrapper[4796]: I0127 07:10:12.063944 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s86t5" Jan 27 07:10:12 crc kubenswrapper[4796]: I0127 07:10:12.064308 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s86t5" Jan 27 07:10:12 crc kubenswrapper[4796]: I0127 07:10:12.115657 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s86t5" Jan 27 07:10:12 crc kubenswrapper[4796]: I0127 07:10:12.522797 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s86t5" Jan 27 07:10:12 crc kubenswrapper[4796]: I0127 07:10:12.626422 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s86t5"] Jan 27 07:10:14 crc kubenswrapper[4796]: I0127 07:10:14.473885 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s86t5" podUID="bc2e16de-6aa3-46a2-8411-df8b9c7bf60d" containerName="registry-server" containerID="cri-o://18519d933cbe78b24ad54ee7cb045258b2b420820ea7a2b1e01ae85f4538b45c" gracePeriod=2 Jan 27 07:10:15 crc kubenswrapper[4796]: I0127 07:10:15.415842 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s86t5" Jan 27 07:10:15 crc kubenswrapper[4796]: I0127 07:10:15.484068 4796 generic.go:334] "Generic (PLEG): container finished" podID="bc2e16de-6aa3-46a2-8411-df8b9c7bf60d" containerID="18519d933cbe78b24ad54ee7cb045258b2b420820ea7a2b1e01ae85f4538b45c" exitCode=0 Jan 27 07:10:15 crc kubenswrapper[4796]: I0127 07:10:15.484109 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s86t5" event={"ID":"bc2e16de-6aa3-46a2-8411-df8b9c7bf60d","Type":"ContainerDied","Data":"18519d933cbe78b24ad54ee7cb045258b2b420820ea7a2b1e01ae85f4538b45c"} Jan 27 07:10:15 crc kubenswrapper[4796]: I0127 07:10:15.484152 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s86t5" event={"ID":"bc2e16de-6aa3-46a2-8411-df8b9c7bf60d","Type":"ContainerDied","Data":"e66dac1b906c45b625b3598d63f235a76b047835a20858e5b6b0e4b46992015e"} Jan 27 07:10:15 crc kubenswrapper[4796]: I0127 07:10:15.484183 4796 scope.go:117] "RemoveContainer" containerID="18519d933cbe78b24ad54ee7cb045258b2b420820ea7a2b1e01ae85f4538b45c" Jan 27 07:10:15 crc kubenswrapper[4796]: I0127 07:10:15.484328 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s86t5" Jan 27 07:10:15 crc kubenswrapper[4796]: I0127 07:10:15.506988 4796 scope.go:117] "RemoveContainer" containerID="3cd3d8a9357b726d311761fb048f52ecde0bab7e1cd301c3be1525ee36bac559" Jan 27 07:10:15 crc kubenswrapper[4796]: I0127 07:10:15.534242 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc2e16de-6aa3-46a2-8411-df8b9c7bf60d-utilities\") pod \"bc2e16de-6aa3-46a2-8411-df8b9c7bf60d\" (UID: \"bc2e16de-6aa3-46a2-8411-df8b9c7bf60d\") " Jan 27 07:10:15 crc kubenswrapper[4796]: I0127 07:10:15.534323 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-np5z4\" (UniqueName: \"kubernetes.io/projected/bc2e16de-6aa3-46a2-8411-df8b9c7bf60d-kube-api-access-np5z4\") pod \"bc2e16de-6aa3-46a2-8411-df8b9c7bf60d\" (UID: \"bc2e16de-6aa3-46a2-8411-df8b9c7bf60d\") " Jan 27 07:10:15 crc kubenswrapper[4796]: I0127 07:10:15.534393 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc2e16de-6aa3-46a2-8411-df8b9c7bf60d-catalog-content\") pod \"bc2e16de-6aa3-46a2-8411-df8b9c7bf60d\" (UID: \"bc2e16de-6aa3-46a2-8411-df8b9c7bf60d\") " Jan 27 07:10:15 crc kubenswrapper[4796]: I0127 07:10:15.534669 4796 scope.go:117] "RemoveContainer" containerID="504b82dd47cec802f2c50aa3aceedf9ebafda6ce933c4137df238b7102f43577" Jan 27 07:10:15 crc kubenswrapper[4796]: I0127 07:10:15.535631 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc2e16de-6aa3-46a2-8411-df8b9c7bf60d-utilities" (OuterVolumeSpecName: "utilities") pod "bc2e16de-6aa3-46a2-8411-df8b9c7bf60d" (UID: "bc2e16de-6aa3-46a2-8411-df8b9c7bf60d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:10:15 crc kubenswrapper[4796]: I0127 07:10:15.541748 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc2e16de-6aa3-46a2-8411-df8b9c7bf60d-kube-api-access-np5z4" (OuterVolumeSpecName: "kube-api-access-np5z4") pod "bc2e16de-6aa3-46a2-8411-df8b9c7bf60d" (UID: "bc2e16de-6aa3-46a2-8411-df8b9c7bf60d"). InnerVolumeSpecName "kube-api-access-np5z4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 27 07:10:15 crc kubenswrapper[4796]: I0127 07:10:15.580266 4796 scope.go:117] "RemoveContainer" containerID="18519d933cbe78b24ad54ee7cb045258b2b420820ea7a2b1e01ae85f4538b45c" Jan 27 07:10:15 crc kubenswrapper[4796]: E0127 07:10:15.580749 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18519d933cbe78b24ad54ee7cb045258b2b420820ea7a2b1e01ae85f4538b45c\": container with ID starting with 18519d933cbe78b24ad54ee7cb045258b2b420820ea7a2b1e01ae85f4538b45c not found: ID does not exist" containerID="18519d933cbe78b24ad54ee7cb045258b2b420820ea7a2b1e01ae85f4538b45c" Jan 27 07:10:15 crc kubenswrapper[4796]: I0127 07:10:15.580784 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18519d933cbe78b24ad54ee7cb045258b2b420820ea7a2b1e01ae85f4538b45c"} err="failed to get container status \"18519d933cbe78b24ad54ee7cb045258b2b420820ea7a2b1e01ae85f4538b45c\": rpc error: code = NotFound desc = could not find container \"18519d933cbe78b24ad54ee7cb045258b2b420820ea7a2b1e01ae85f4538b45c\": container with ID starting with 18519d933cbe78b24ad54ee7cb045258b2b420820ea7a2b1e01ae85f4538b45c not found: ID does not exist" Jan 27 07:10:15 crc kubenswrapper[4796]: I0127 07:10:15.580930 4796 scope.go:117] "RemoveContainer" containerID="3cd3d8a9357b726d311761fb048f52ecde0bab7e1cd301c3be1525ee36bac559" Jan 27 07:10:15 crc kubenswrapper[4796]: E0127 07:10:15.581240 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cd3d8a9357b726d311761fb048f52ecde0bab7e1cd301c3be1525ee36bac559\": container with ID starting with 3cd3d8a9357b726d311761fb048f52ecde0bab7e1cd301c3be1525ee36bac559 not found: ID does not exist" containerID="3cd3d8a9357b726d311761fb048f52ecde0bab7e1cd301c3be1525ee36bac559" Jan 27 07:10:15 crc kubenswrapper[4796]: I0127 07:10:15.581266 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cd3d8a9357b726d311761fb048f52ecde0bab7e1cd301c3be1525ee36bac559"} err="failed to get container status \"3cd3d8a9357b726d311761fb048f52ecde0bab7e1cd301c3be1525ee36bac559\": rpc error: code = NotFound desc = could not find container \"3cd3d8a9357b726d311761fb048f52ecde0bab7e1cd301c3be1525ee36bac559\": container with ID starting with 3cd3d8a9357b726d311761fb048f52ecde0bab7e1cd301c3be1525ee36bac559 not found: ID does not exist" Jan 27 07:10:15 crc kubenswrapper[4796]: I0127 07:10:15.581280 4796 scope.go:117] "RemoveContainer" containerID="504b82dd47cec802f2c50aa3aceedf9ebafda6ce933c4137df238b7102f43577" Jan 27 07:10:15 crc kubenswrapper[4796]: E0127 07:10:15.581496 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"504b82dd47cec802f2c50aa3aceedf9ebafda6ce933c4137df238b7102f43577\": container with ID starting with 504b82dd47cec802f2c50aa3aceedf9ebafda6ce933c4137df238b7102f43577 not found: ID does not exist" containerID="504b82dd47cec802f2c50aa3aceedf9ebafda6ce933c4137df238b7102f43577" Jan 27 07:10:15 crc kubenswrapper[4796]: I0127 07:10:15.581517 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"504b82dd47cec802f2c50aa3aceedf9ebafda6ce933c4137df238b7102f43577"} err="failed to get container status \"504b82dd47cec802f2c50aa3aceedf9ebafda6ce933c4137df238b7102f43577\": rpc error: code = NotFound desc = could not 
find container \"504b82dd47cec802f2c50aa3aceedf9ebafda6ce933c4137df238b7102f43577\": container with ID starting with 504b82dd47cec802f2c50aa3aceedf9ebafda6ce933c4137df238b7102f43577 not found: ID does not exist" Jan 27 07:10:15 crc kubenswrapper[4796]: I0127 07:10:15.635680 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bc2e16de-6aa3-46a2-8411-df8b9c7bf60d-utilities\") on node \"crc\" DevicePath \"\"" Jan 27 07:10:15 crc kubenswrapper[4796]: I0127 07:10:15.635729 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-np5z4\" (UniqueName: \"kubernetes.io/projected/bc2e16de-6aa3-46a2-8411-df8b9c7bf60d-kube-api-access-np5z4\") on node \"crc\" DevicePath \"\"" Jan 27 07:10:16 crc kubenswrapper[4796]: I0127 07:10:16.149740 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc2e16de-6aa3-46a2-8411-df8b9c7bf60d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bc2e16de-6aa3-46a2-8411-df8b9c7bf60d" (UID: "bc2e16de-6aa3-46a2-8411-df8b9c7bf60d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 27 07:10:16 crc kubenswrapper[4796]: I0127 07:10:16.246772 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bc2e16de-6aa3-46a2-8411-df8b9c7bf60d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 27 07:10:16 crc kubenswrapper[4796]: I0127 07:10:16.412835 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s86t5"] Jan 27 07:10:16 crc kubenswrapper[4796]: I0127 07:10:16.418922 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s86t5"] Jan 27 07:10:16 crc kubenswrapper[4796]: I0127 07:10:16.756142 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc2e16de-6aa3-46a2-8411-df8b9c7bf60d" path="/var/lib/kubelet/pods/bc2e16de-6aa3-46a2-8411-df8b9c7bf60d/volumes" Jan 27 07:10:33 crc kubenswrapper[4796]: I0127 07:10:33.787841 4796 patch_prober.go:28] interesting pod/machine-config-daemon-qfqgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:10:33 crc kubenswrapper[4796]: I0127 07:10:33.788366 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" podUID="84d7512b-555d-440a-b817-deb8ba12f61d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 27 07:11:03 crc kubenswrapper[4796]: I0127 07:11:03.788817 4796 patch_prober.go:28] interesting pod/machine-config-daemon-qfqgm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 27 07:11:03 crc kubenswrapper[4796]: I0127 07:11:03.789408 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" podUID="84d7512b-555d-440a-b817-deb8ba12f61d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 27 07:11:03 crc kubenswrapper[4796]: I0127 07:11:03.789472 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" Jan 27 07:11:03 crc kubenswrapper[4796]: I0127 07:11:03.790389 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fd97f058415624dbb6034286295bb31089e7b13227a78c7f9b9089ec94201786"} pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 27 07:11:03 crc kubenswrapper[4796]: I0127 07:11:03.790480 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" podUID="84d7512b-555d-440a-b817-deb8ba12f61d" containerName="machine-config-daemon" containerID="cri-o://fd97f058415624dbb6034286295bb31089e7b13227a78c7f9b9089ec94201786" gracePeriod=600 Jan 27 07:11:04 crc kubenswrapper[4796]: I0127 07:11:04.902845 4796 generic.go:334] "Generic (PLEG): container finished" podID="84d7512b-555d-440a-b817-deb8ba12f61d" containerID="fd97f058415624dbb6034286295bb31089e7b13227a78c7f9b9089ec94201786" exitCode=0 Jan 27 07:11:04 crc kubenswrapper[4796]: I0127 07:11:04.902923 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" event={"ID":"84d7512b-555d-440a-b817-deb8ba12f61d","Type":"ContainerDied","Data":"fd97f058415624dbb6034286295bb31089e7b13227a78c7f9b9089ec94201786"} Jan 27 07:11:04 crc kubenswrapper[4796]: I0127 07:11:04.903667 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfqgm" event={"ID":"84d7512b-555d-440a-b817-deb8ba12f61d","Type":"ContainerStarted","Data":"9185e723f3dcd6dc281a613150729d1619d092006db37505be654c451e638609"} Jan 27 07:11:04 crc kubenswrapper[4796]: I0127 07:11:04.903696 4796 scope.go:117] "RemoveContainer" containerID="9d6ff2d1b646e667cc028cbd1ff4dea7c896e3e2292d3fb00ecbecdf6a7880b1" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515136062425024451 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015136062425017366 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015136057140016507 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015136057141015460 5ustar corecore